Databases topics
https://www.googlecloudcommunity.com/gc/Databases/bd-p/cloud-database
Last updated: Fri, 22 Nov 2024 12:46:15 GMT

CloudSQL instance failed to update Postgres and now it's stuck in maintenance mode
https://www.googlecloudcommunity.com/gc/Databases/CloudSQL-instance-failed-to-update-Postgres-and-now-its-stuck-in/m-p/834238#M3784
Posted by a-korolkovs on Sat, 16 Nov 2024 20:05:59 GMT

We had a routine database version upgrade from Postgres 15 to Postgres 17. 21 instances upgraded without issues, but one did not.

Instance information:
- db-custom-1-3840
- maintenanceVersion: POSTGRES_15_8.R20240910.01_02
- 10 GB SSD
- HA mode
- database size approx. 4 GB
- located in europe-west3-c

Symptoms:
About 15 minutes after initiating the upgrade, an error appeared in Operations and logs. [screenshot: error in Operations and logs] The instance itself shows that it is under maintenance.

Troubleshooting:
1. Actions such as restart/patch/failover from the web UI and gcloud do not work, which is expected for an instance in maintenance mode.
2. Looking at the logs did not show any new information; they end at the instance shutdown.
3. After waiting a few hours, a backup restore was initiated: a new instance was created in the same region with the same specifications. The process did not finish within two hours, this time with no visible errors. The new instance is now also stuck in maintenance mode.
4. A third instance was then created, this time in europe-north1-b. Same result as the previous attempt; it has been stuck for 20 minutes at the time of writing.

Edit: The second restore attempt failed with the same error in Operations and logs after 2 hours.
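For anyone hitting a similar hang, a minimal sketch (instance name is a placeholder) for pulling the status and error payload of the stuck operation from the CLI, which is also useful evidence to attach to a support case:

```
# List recent operations on the instance; the failed upgrade shows up here
# with its status and error payload (my-instance is a placeholder):
gcloud sql operations list --instance=my-instance --limit=10

# Inspect the failed operation in full, using an ID from the list above:
gcloud sql operations describe OPERATION_ID

# Confirm what state the instance itself reports:
gcloud sql instances describe my-instance --format="value(state)"
```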

Is Babelfish for PostgreSQL Possible on Google Cloud SQL?
https://www.googlecloudcommunity.com/gc/Databases/Is-Babelfish-for-PostgreSQL-Possible-on-Google-Cloud-SQL/m-p/833350#M3780
Posted by jfbaro on Thu, 14 Nov 2024 18:54:46 GMT

Hi everyone,

Has anyone tried using Babelfish for PostgreSQL (https://babelfishpg.org/) on Cloud SQL? From what I understand, it probably wouldn't work, since Babelfish requires changes to PostgreSQL's source code, which managed services like Cloud SQL don't usually allow. But I'm curious: has anyone found a way to make it work, or come across a Cloud SQL version that supports it?

We're thinking it could help us migrate SQL Server-based applications more gradually. The idea would be to use Babelfish to get things running on PostgreSQL, then slowly update SQL Server-specific queries and procedures to ANSI SQL or PostgreSQL-native ones, eventually phasing out Babelfish completely.

Would love to hear if anyone has explored this or has thoughts on whether it's possible. 😍

Thanks!

Restore an instance of PostgreSQL 11
https://www.googlecloudcommunity.com/gc/Databases/Restore-an-instance-of-PostgreSQL-11/m-p/832288#M3772
Posted by SantiagoMsur on Tue, 12 Nov 2024 20:54:20 GMT

Hi, I need to restore a PostgreSQL 11 database in Google Cloud into a new instance, but when I go to create the instance, the database version selector doesn't offer PostgreSQL 11.

Has anyone had the same problem?

Thanks
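Worth noting that PostgreSQL 11 is past end of life, which is likely why the console selector no longer lists it. A sketch with placeholder names that names the version explicitly via gcloud; if Cloud SQL no longer offers POSTGRES_11 for new instances, the create call will fail with a clear message:

```
# Attempt to create the target instance with the version named explicitly:
gcloud sql instances create pg11-restore-target \
  --database-version=POSTGRES_11 \
  --tier=db-custom-1-3840 \
  --region=europe-west1

# If creation succeeds, restore a backup from the old instance into it:
gcloud sql backups list --instance=old-pg11-instance
gcloud sql backups restore BACKUP_ID \
  --restore-instance=pg11-restore-target \
  --backup-instance=old-pg11-instance
```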

pgvectorscale support in Cloud SQL for Postgres
https://www.googlecloudcommunity.com/gc/Databases/pgvectorscale-support-in-Cloud-SQL-for-Postgres/m-p/832028#M3771
Posted by rojaleh on Tue, 12 Nov 2024 13:51:32 GMT

Hi all,

Curious whether the Cloud SQL team has any plans to support pgvectorscale (https://github.com/timescale/pgvectorscale?tab=readme-ov-file#installation) for Postgres?

Many thanks in advance!

Dynamic schema evolution in a BigLake Iceberg table
https://www.googlecloudcommunity.com/gc/Databases/dynamic-schema-evaulavation-in-biglake-ICERG-table/m-p/831385#M3770
Posted by pavan792reddy on Mon, 11 Nov 2024 10:13:07 GMT

Hi team,

According to the Iceberg documentation, the product supports dynamic schema evolution.

We are looking for a solution to load streaming files from GCS into a BigQuery BigLake Iceberg table using a Spark stored procedure.

According to the GCP BigLake Iceberg documentation, Spark SQL is the only way to load data into an Iceberg table. Is there an alternative way to load data into a BigLake Iceberg table with dynamic schema evolution?

Is it possible to select databases? - Database Migration Service
https://www.googlecloudcommunity.com/gc/Databases/Is-it-possible-to-select-databases-Database-migration-service/m-p/831447#M3768
Posted by MiloRo1ey on Mon, 11 Nov 2024 13:11:42 GMT

Is it possible to move a subset of databases using Google's Database Migration Service (DMS)?

We are not interested in migrating everything at once. We won't be moving databases to the cloud in large batches, but rather individually.

The source database is MySQL and the destination is Cloud SQL.

As I read the documentation, it is not possible. Are there any workarounds for this?

Thanks in advance!
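One common workaround, sketched below with placeholder names (mydb, my-bucket, target-instance): skip DMS for the per-database moves and do a classic dump-and-import, one database at a time.

```
# Export a single database from the on-premises MySQL server:
mysqldump --databases mydb --single-transaction --set-gtid-purged=OFF > mydb.sql

# Stage the dump in Cloud Storage, where Cloud SQL can read it:
gsutil cp mydb.sql gs://my-bucket/mydb.sql

# Import just that database into the Cloud SQL instance:
gcloud sql import sql target-instance gs://my-bucket/mydb.sql --database=mydb
```

The trade-off versus DMS is downtime: a dump is a point-in-time copy with no change replication, so writes to the source during the import are lost unless you cut over first.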

Google Cloud
https://www.googlecloudcommunity.com/gc/Databases/Google-Cloud/m-p/830518#M3765
Posted by Tobi189 on Fri, 08 Nov 2024 07:25:37 GMT

We're having trouble with the data export functionality. Typically, our data export takes about 10 minutes to complete. However, since yesterday, exports have been running continuously without ever completing. Can anyone help me solve this problem? [screenshot: export operations stuck in progress]

Ctrl + C exits the MySQL client
https://www.googlecloudcommunity.com/gc/Databases/Ctrl-C-exits-the-MySQL-client/m-p/829337#M3761
Posted by sohamssd on Wed, 06 Nov 2024 00:50:03 GMT

If I use the Cloud SQL for MySQL client, hitting Ctrl+C exits the shell outright instead of doing nothing. This is annoying behavior, since most developers are used to hitting Ctrl+C to abandon an unfinished or mistyped query.

Steps to reproduce:
1. Use the gcloud CLI to log in to MySQL: gcloud sql connect <name> --user=<username> --project <gcloud-project>
2. Select your database.
3. Hit Ctrl+C.
4. Observe that the client exits.

References: https://dba.stackexchange.com/a/132936
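A possible workaround: `gcloud sql connect` temporarily authorizes your IP and then shells out to the local mysql binary, and that binary has a `--sigint-ignore` option that makes Ctrl+C abandon the current input instead of exiting. A sketch with placeholder credentials:

```
# With your IP authorized (or the Cloud SQL Auth Proxy running on localhost),
# connect with the stock client and tell it to ignore SIGINT:
mysql --sigint-ignore -h 127.0.0.1 -u USERNAME -p
```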

ChatGPT API Connection With Google Cloud Database Issues
https://www.googlecloudcommunity.com/gc/Databases/ChatGpt-Api-Connection-With-Google-Cloud-Database-Issues/m-p/829010#M3759
Posted by Ozren on Tue, 05 Nov 2024 13:40:11 GMT

We are encountering multiple issues while trying to establish a connection between our Cloud Run service and Cloud SQL database using API Gateway. Our goal is to set up a secure, functional API that will allow external access to the database. Here's a detailed summary of what we have done and the problems we're facing.

Project Overview
- Objective: establish a secure API connection to our Cloud SQL database so our external application (using API Gateway) can interact with it.
- Setup: we're using Cloud SQL (MySQL) for data storage, Cloud Run for service hosting, and API Gateway for secure access. The final goal is to enable API requests to interact with the Cloud SQL database through Cloud Run, with api_key security in place.

Steps Completed So Far
1. API Gateway configuration:
   - Configured an API Gateway (LEX API Gateway) with a swagger.yaml file.
   - Set up the x-google-backend extension to route requests from the Gateway to our Cloud Run service, with protocol http/1.1 and the Cloud Run service URL as the address.
   - Configured security with an api_key requirement.
2. Cloud SQL database setup:
   - Instance name: sqltiger
   - Databases: two databases named LexComms and Tiger.
   - Public IP: configured public IP access for the Cloud SQL instance.
   - Permissions: granted relevant roles (Cloud SQL Admin, Cloud SQL Client, Cloud SQL Viewer) to the service account used for API Gateway and Cloud Run.
3. Cloud Run configuration:
   - Environment variables: configured DB connection details as environment variables (DB_HOST, DB_USER, DB_PASSWORD, DB_NAME).
   - Cloud SQL connections: set up the Cloud SQL instance connection directly within Cloud Run.
   - Service account: changed the service account for Cloud Run to the one with API Gateway permissions (lex-sql-access-api).
   - Port configuration: verified that Cloud Run is set to use port 8080.

Current Issues
1. Database connection refusal:
   - Error message when sending a request via curl to the API Gateway:
     {"error": "2003: Can't connect to MySQL server on 'None:3306' (Errno 111: Connection refused)"}
   - We have verified that the database is accessible via its public IP and credentials.
2. API key and access issues:
   - Initially, we encountered a PERMISSION_DENIED error, which was resolved by enabling the API service in the project.
   - We still face issues where API requests do not seem to reach the database, even after setting the API key and verifying that it is active and unrestricted.
3. Connection testing from a local machine:
   - We attempted to connect to the database directly using the mysql command from a local environment but ran into command-recognition issues (Windows environment). The database should, however, be reachable via its public IP.
4. Cloud SQL connection configuration uncertainty:
   - We are uncertain whether the Cloud Run environment correctly recognizes the Cloud SQL connection, despite setting up the connection details both as environment variables and in the Cloud SQL connections section within Cloud Run.

Request for Assistance
Could you please guide us on the following points?
1. Configuration check: does our configuration of API Gateway, Cloud Run, and Cloud SQL align with best practices, particularly for securing database connections through API Gateway?
2. Cloud SQL direct connection (Cloud Run): are any additional configurations required to ensure that Cloud Run can securely connect to Cloud SQL?
3. Debugging tips: if the configuration appears correct, what debugging steps could identify where the connection is failing?

Thank you for your assistance with this complex setup.
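The 'None:3306' in the error suggests DB_HOST is unset in the running revision (Python renders a missing value as None). One pattern worth trying, as a sketch with placeholder names (lex-api, IMAGE_URL, PROJECT:REGION): attach the instance to the service and connect over the Unix socket Cloud Run mounts under /cloudsql/, which sidesteps host/port entirely:

```
# Attach the Cloud SQL instance to the service and pass the connection name
# as an env var instead of a TCP host:
gcloud run deploy lex-api \
  --image=IMAGE_URL \
  --add-cloudsql-instances=PROJECT:REGION:sqltiger \
  --set-env-vars="INSTANCE_CONNECTION_NAME=PROJECT:REGION:sqltiger,DB_USER=api_user,DB_NAME=LexComms"
# The application then connects to the socket /cloudsql/PROJECT:REGION:sqltiger
# rather than to DB_HOST:3306.
```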

Firestore: Storage charges do not go down after deleting data
https://www.googlecloudcommunity.com/gc/Databases/firestore-Storage-charges-do-not-go-down-after-deleting-data/m-p/828666#M3756
Posted by kyle_int on Mon, 04 Nov 2024 18:38:37 GMT

Hello,

I deleted a large amount of data from Firestore in our test project 5 days ago, but the Cloud Firestore Storage cost in the billing report has not decreased. Even though I deleted about 2 million documents, the cost increased instead of decreasing.

I saw an article saying this could be because Firestore's garbage collection is delayed, but it's strange that nothing has changed even after 5 days.

Did I miss something?

Thanks for your time and attention.

Replica DB Max Connections Display Error - max_connections Value Mismatch
https://www.googlecloudcommunity.com/gc/Databases/Replica-DB-Max-Connections-Display-Error-Max-connections-Value/m-p/828353#M3754
Posted by davidhsu on Mon, 04 Nov 2024 06:43:31 GMT

Environment details:
We are experiencing an issue in our Google Cloud environment related to the max_connections settings displayed for replica databases. Specifically, the primary database and its replica are configured with different max_connections values, but the monitoring trend graph displays the same maximum-connections value for both. This does not match the actual database settings.

Issue description:
- Primary database: configured with max_connections set to 10,000.
- Replica database: configured with max_connections set to 6,000.
- Trend graph issue: in Cloud Monitoring, the trend graph shows both the primary and replica databases as having 10,000 maximum connections, even though the replica is configured with a lower value.

This issue seems to affect the accuracy of monitoring data for all master-replica database configurations in our environment.

Troubleshooting steps performed:
- Verified that each database's max_connections setting is correct (i.e., 10,000 for the primary and 6,000 for the replica).
- Checked the data source in Cloud Monitoring and confirmed that the displayed values do not match the actual configuration.

Additional information:
- This appears to be a UI display error within Cloud Monitoring.
- It can be replicated by following similar steps in a comparable master-replica setup.

Assistance request:
We kindly request any insights or recommendations on how to ensure that Cloud Monitoring accurately displays the max_connections values for replica databases. Is this a known display issue in monitoring, or is there a specific configuration step we might be missing?

Any help or guidance would be greatly appreciated.

Thank you!

Best regards,
David
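One way to pin down whether this is purely a display problem: read the live setting from each instance and compare it with the graph. A minimal sketch, assuming network access and placeholder host IPs; if these return 10,000 and 6,000 as configured, the discrepancy is on the monitoring side:

```
# Confirm what each instance actually enforces, independent of the graph:
psql -h PRIMARY_IP -U postgres -c "SHOW max_connections;"   # expect 10000
psql -h REPLICA_IP -U postgres -c "SHOW max_connections;"   # expect 6000
```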

Should I run two Cloud SQL Proxy instances for a read replica?
https://www.googlecloudcommunity.com/gc/Databases/Should-I-make-two-cloud-sql-proxy-for-read-replica/m-p/828351#M3753
Posted by dong-gwan on Mon, 04 Nov 2024 06:37:43 GMT

Hi,

I created a Cloud SQL instance and successfully connected to it from my GKE pod with a Cloud SQL Proxy sidecar container, following this guide:

https://cloud.google.com/sql/docs/postgres/connect-instance-kubernetes?hl=ko

I have now created a read replica instance. Do I need to run another Cloud SQL Proxy to connect to the read replica?

Also, if I want to load-balance across multiple read replicas, is HAProxy, as described in the link below, the only way?

https://cloud.google.com/blog/products/databases/using-haproxy-to-scale-read-only-workloads-on-cloud-sql-for-postgresql?hl=en

Thanks for helping me.
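On the first question, a single proxy process can serve both instances, so a second sidecar isn't strictly required. A minimal sketch assuming the v2 proxy binary (cloud-sql-proxy) and placeholder connection names; with `--port`, the proxy assigns consecutive ports in the order the instances are listed:

```
# One proxy, two instances: primary on 5432, replica on 5433.
./cloud-sql-proxy --port 5432 \
  PROJECT:REGION:primary-instance \
  PROJECT:REGION:replica-instance
# The application then points writes at localhost:5432 and reads at
# localhost:5433.
```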

Firestore Indexes Disappeared
https://www.googlecloudcommunity.com/gc/Databases/Firestore-Indexes-Disappeared/m-p/827776#M3749
Posted by CodeSmith32 on Fri, 01 Nov 2024 14:26:54 GMT

We have two environments: staging and production.

I recently deployed to our staging server and got errors of the form 'The query requires an index'. I fixed this by following the link the error provides. Because I knew we'd be deploying the same code to production later, I adjusted the link so it would add the same indexes to production as well. This *appeared* to work...

...until I deployed to production yesterday and got the same errors. After reviewing the database, the indexes had been deleted.

Why would they have been deleted?

Edit: Worse than this, the indexes on the staging Firestore were also deleted, and now I'm getting the same errors on staging too. Is there a way to make indexes permanent? Or does Firestore require users to re-add them every few days..?
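Firestore doesn't expire indexes on its own, so something (another deploy, a teammate, or an infrastructure-as-code tool reconciling an empty index list) is likely deleting them. Keeping the definitions in source control makes them easy to diff and restore; a sketch using the Firebase CLI with placeholder project IDs:

```
# Capture the current index definitions from one environment:
firebase firestore:indexes --project staging-project > firestore.indexes.json

# Re-apply the same definitions to either environment (firebase.json must
# point its "firestore.indexes" entry at this file):
firebase deploy --only firestore:indexes --project prod-project
```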

Cloud SQL service account connection limit
https://www.googlecloudcommunity.com/gc/Databases/Cloud-SQL-service-account-connection-limit/m-p/826391#M3740
Posted by jagkoth on Tue, 29 Oct 2024 17:31:05 GMT

Despite HikariCP pool limits being in place, is there a way to limit the number of connections a Google service account (IAM user) can open against Cloud SQL for Postgres? This is a KCC (Config Connector) environment; we are trying to consume external database resources.
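If the goal is a hard cap enforced by the database itself, Postgres has a per-role connection limit, and it applies to IAM database users too. A sketch with placeholder names; my understanding is that a service account's database role is its email without the .gserviceaccount.com suffix:

```
# Cap the service account's role at 50 concurrent connections
# (role name and host are placeholders):
psql -h INSTANCE_IP -U postgres -c \
  'ALTER ROLE "my-sa@my-project.iam" CONNECTION LIMIT 50;'
```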

How to create a PSC endpoint against Memorystore (Redis Cluster)
https://www.googlecloudcommunity.com/gc/Databases/How-to-create-PSC-endpoint-against-Memory-Store-Redis-cluster/m-p/826139#M3738
Posted by amitkhosla on Tue, 29 Oct 2024 09:45:30 GMT

Hi,

We are trying to use Memorystore (Redis Cluster), but we are not able to set up a PSC endpoint against it.

Can you please guide me through the steps? We also cannot see a PSC service attachment on the Memorystore cluster, yet if we go to Private Service Connect, it shows Private Service Connect entries against the private IPs. Since this is a managed service, we would prefer to use PSC only and not be limited to VPC peering, etc.

Thanks & regards,
Amit
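I may be wrong on the details, but as far as I can tell Memorystore for Redis Cluster provisions the PSC connections into the consumer VPC itself when the cluster is created, rather than exposing a service attachment for you to target with your own forwarding rule — which would explain why the entries appear under Private Service Connect but not as an attachment on the cluster. A sketch with placeholder names for inspecting what was provisioned:

```
# Show the PSC connections the service created for the cluster, including the
# consumer-side IPs and the discovery endpoint clients should connect to:
gcloud redis clusters describe my-cluster \
  --region=us-central1 \
  --format="yaml(pscConnections, discoveryEndpoints)"
```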

Connecting a DAG to an MSSQL Database in Another Project
https://www.googlecloudcommunity.com/gc/Databases/Connecting-DAG-to-MSSQL-Data-base-in-another-Project/m-p/824788#M3730
Posted by Omingole on Fri, 25 Oct 2024 10:00:23 GMT

We have created a public Cloud Composer environment in Project1 and written some DAGs. In one of the DAGs we want to connect to an MSSQL database. The database requires the client IP to be whitelisted before connecting. Both the database and the Composer environment are in the default networks of their respective projects. We are able to connect to other databases in the same project using DAGs.

What we have tried:
- Whitelisting the public IP of the GKE cluster did not work.
- Whitelisting the IP range of the GKE pods did not work.
- Whitelisting the IP of the default NAT connection also did not work.

We cannot figure out how to connect to the database.
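When whitelisting guesses fail, it can help to ask the environment itself which address its traffic leaves from. A sketch with placeholder names: add a throwaway DAG (called debug_egress_ip here, entirely hypothetical) whose single BashOperator task runs `curl -s https://ifconfig.me` (ifconfig.me is just one example echo service), trigger it, and whitelist whatever address appears in the task log:

```
# Trigger the hypothetical debug DAG through the Airflow CLI wrapper:
gcloud composer environments run my-environment \
  --location=us-central1 \
  dags trigger -- debug_egress_ip
# Then read the reported IP from the task's log in the Airflow UI and add it
# to the MSSQL whitelist; the log settles which address actually egresses.
```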

Unable to run pg_repack due to superuser error
https://www.googlecloudcommunity.com/gc/Databases/Unable-to-run-pg-repack-due-to-superuser-error/m-p/824480#M3726
Posted by foo on Thu, 24 Oct 2024 17:51:07 GMT

Hello, I am trying to run `pg_repack` on one of my databases, but I'm stuck on this error message when trying to run the extension:

```
pg_repack --dry-run -h xxx.xxx.xx.xx -p 5432 -U postgres -d db_name -t table_name
INFO: Dry run enabled, not executing repack
Password:
ERROR: pg_repack failed with error: You must be a superuser to use pg_repack
```

I followed the guide at https://cloud.google.com/sql/docs/postgres/extensions#pg_repack on how to set up and run the extension. The user I'm using has the cloudsqlsuperuser role and, as far as I can tell, has the correct permissions for the database in question too. I'm at a loss as to what triage or debugging steps to take next.

For additional context, I recently upgraded this database to PostgreSQL 16.4. I am using pg_repack v1.5.0 on my client and in the database.
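On Cloud SQL no role is a true Postgres superuser (cloudsqlsuperuser included), so pg_repack's client-side check fails regardless of your grants. The pg_repack client ships a flag to skip that check; a sketch reusing the connection parameters from the post:

```
# Skip the client-side superuser check, intended for managed environments:
pg_repack --dry-run --no-superuser-check \
  -h xxx.xxx.xx.xx -p 5432 -U postgres -d db_name -t table_name
```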

DMS large database
https://www.googlecloudcommunity.com/gc/Databases/DMS-large-database/m-p/823147#M3723
Posted by gustavoszrebka on Tue, 22 Oct 2024 13:14:55 GMT

Hi, I'm starting a migration of a large database from on-premises PostgreSQL to Cloud SQL.

I've successfully migrated our internal test database, but now we face the challenge of migrating the production database, which is approximately 800 GB. My questions:

- Is this size supportable? Any concerns?
- Where is the dump file located on the source machine? I need to account for it when sizing the disks.

Any help will be highly appreciated.
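800 GB is within what DMS handles; as far as I know, DMS for PostgreSQL streams the initial snapshot and ongoing changes through the pglogical extension rather than staging a dump file on the source machine, so local disk for a dump shouldn't be a factor. A sketch (database name is a placeholder) for sizing the job beforehand:

```
# Total size of the source database:
psql -d mydb -c "SELECT pg_size_pretty(pg_database_size('mydb'));"

# The largest tables, which dominate the initial copy time:
psql -d mydb -c "SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
                 FROM pg_class WHERE relkind = 'r'
                 ORDER BY pg_total_relation_size(oid) DESC LIMIT 10;"
```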

Questions about migrating from PostgreSQL to Spanner and how geo-partitioning works
https://www.googlecloudcommunity.com/gc/Databases/Questions-about-migrating-from-PostgeSQL-to-Spanner-and-how-geo/m-p/822418#M3717
Posted by bubiche on Mon, 21 Oct 2024 03:32:58 GMT

I'm looking to migrate a PostgreSQL database to Spanner to make use of the geo-partitioning feature, but there are a few things I'm unsure about. Can someone help me with these questions, please?

1. How should I handle "special" data types like the PostGIS `geography` type? What will the migration tool (https://github.com/GoogleCloudPlatform/spanner-migration-tool) turn them into?
2. Can I move a row between geo-partitions?
3. Does full-text search work with geo-partitions?
4. If I have a query like `SELECT * FROM table WHERE some_id = 1` and the `some_id` column is indexed, does Spanner have to go through all geo-partitions to find it? Or does it have a way of knowing which partition to look in?
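On question 1, I'd expect PostGIS types to need an explicit mapping decision, since Spanner has no native geography type (values typically end up as strings or bytes, e.g. WKT in a STRING column). A sketch, with a placeholder database name, for inventorying the affected columns before running the tool:

```
# List every column using a PostGIS type, so each can be mapped deliberately:
psql -d mydb -c "SELECT table_schema, table_name, column_name, udt_name
                 FROM information_schema.columns
                 WHERE udt_name IN ('geography', 'geometry');"
```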

Can't connect via cloudsqlproxy with new SQL instance without whitelisting IP
https://www.googlecloudcommunity.com/gc/Databases/Can-t-connect-via-cloudsqlproxy-with-new-SQL-instance-without/m-p/821629#M3713
Posted by AndreasM on Fri, 18 Oct 2024 07:46:06 GMT

Hey there,

After years of using Cloud SQL for MySQL with the cloudsqlproxy across several SQL instances, I created a new SQL instance and suddenly found that I can't connect to it through a running cloudsqlproxy.

After a while I found out that I can connect once I whitelist my IP, which I never needed to do before on other instances. The GCP console under SQL -> Connections clearly states "an authorised network OR the Cloud SQL Proxy" (see the screenshot reference below).

How do I connect?
- Install cloudsqlproxy on OS X via brew (latest version)
- cloudsqlproxy --projects=... -dir=<localdir>
- Connect with my MySQL tool of choice using the Unix socket path

How does the issue show up?
- I connect to the cloudsqlproxy successfully and the relevant Unix socket path is printed out by cloudsqlproxy.
- When connecting to the instance via my MySQL tool of choice, it reports a username/password issue.
- The cloudsqlproxy shows a connection attempt.
- I authorize my IP, either by adding it to authorized networks manually or by running `gcloud sql connect`.
- I repeat the connection via my MySQL tool without stopping the cloudsqlproxy -> the connection works.

My Cloud SQL MySQL 8.4 instance network setup:
- Private IP & public IP (with default network)
- Public IP without authorised networks (when I add my IP here it works, but it shouldn't be necessary)
- SSL encryption disabled, because I am only using the Cloud SQL Proxy to connect
- Private path disabled

Did anyone else have this issue, or does anyone know how I can fix it? The big problem is that some users who need to use the instance via the cloudsqlproxy don't have a static IP.

[screenshot: SQL -> Connections page stating "an authorised network OR the Cloud SQL Proxy"]
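One thing worth ruling out, as a sketch reusing the post's <localdir> placeholder: some MySQL GUI tools silently fall back to TCP (and thus the public IP) unless the host is exactly "localhost", which would explain why whitelisting your IP "fixes" a proxy setup. Connecting through the socket explicitly with the stock client isolates that:

```
# Force the connection through the proxy's Unix socket; if this succeeds
# while the GUI tool fails, the tool was connecting over TCP all along:
mysql -u USERNAME -p --socket=<localdir>/PROJECT:REGION:INSTANCE
```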