Fixed
Status Update
Comments
ar...@google.com <ar...@google.com> #2
Connecting to Cloud SQL from Cloud Functions is currently not supported: the UNIX socket does not exist (causing ENOENT) and there is no defined IP range to whitelist (causing ETIMEDOUT). One possibility is to whitelist 0.0.0.0/0 on the Cloud SQL instance, but this is not recommended for security reasons.
I will treat this as a feature request though, as there is interest in having this supported.
[Deleted User] <[Deleted User]> #3
Thanks for responding.
Do you have any roadmap for this?
It would be good to point to this issue in the documentation (as a note), since not being able to use Cloud SQL seems like a big missing feature. Most people would expect this to work.
Thanks.
ni...@enplore.com <ni...@enplore.com> #4
I would love to see this as well
s2...@gmail.com <s2...@gmail.com> #5
Count me as VERY interested!
na...@nathanwaters.com <na...@nathanwaters.com> #6
Want
sp...@rajhradice.net <sp...@rajhradice.net> #7
+1
zh...@gmail.com <zh...@gmail.com> #8
+1
ja...@gmail.com <ja...@gmail.com> #9
Having the same problem.
Here is a tip: we are going to deploy a REST API in front of our Cloud SQL instance via App Engine or Compute Engine and let the Cloud Functions call the REST API instead, so we won't need to open up 0.0.0.0/0. In fact, none of our traffic is expected to egress, as it will be using internal IPs for the REST API instance.
ni...@enplore.com <ni...@enplore.com> #10
Je...@gochin.com: Remember that Cloud Functions don't live in the internal IP segments of Compute Engine, so your REST API needs to be open to the public for this to work (see https://issuetracker.google.com/issues/36859738 )
pu...@gmail.com <pu...@gmail.com> #11
+100
si...@gmail.com <si...@gmail.com> #12
+1
re...@gmail.com <re...@gmail.com> #13
+1
eq...@gmail.com <eq...@gmail.com> #14
+1
to...@gmail.com <to...@gmail.com> #15
+1
ma...@gmail.com <ma...@gmail.com> #16
+1
ga...@eforall.org <ga...@eforall.org> #17
+1
sh...@gmail.com <sh...@gmail.com> #18
+1
aa...@brethorsting.com <aa...@brethorsting.com> #19
+1
[Deleted User] <[Deleted User]> #20
+1
co...@gmail.com <co...@gmail.com> #21
+1
an...@gmail.com <an...@gmail.com> #22
Big plus one for this; it seems an obvious integration to add.
al...@gmail.com <al...@gmail.com> #23
+1
[Deleted User] <[Deleted User]> #24
I was shocked to learn after I'd put an application together that it didn't work because of this. Honestly, if I'd known that something this basic was missing from Cloud Functions I'd have gone with one of the competitors. For now I'll have to figure out something, but for my future applications please let Cloud Functions access Cloud SQL!
sh...@gmail.com <sh...@gmail.com> #25
+1
[Deleted User] <[Deleted User]> #26
+1
wr...@gmail.com <wr...@gmail.com> #27
+1
bl...@google.com <bl...@google.com>
il...@gmail.com <il...@gmail.com> #28
+1
[Deleted User] <[Deleted User]> #29
+1
er...@gmail.com <er...@gmail.com> #30
Please enable this functionality!
ra...@gmail.com <ra...@gmail.com> #31
+1
ae...@vbcenter.nu <ae...@vbcenter.nu> #32
+1
ga...@benchmarkurbanism.com <ga...@benchmarkurbanism.com> #33
+1
wa...@gmail.com <wa...@gmail.com> #34
+1
pi...@eqtpartners.com <pi...@eqtpartners.com> #35
+1
sa...@gmail.com <sa...@gmail.com> #36
+1
se...@webloft.io <se...@webloft.io> #37
+1
[Deleted User] <[Deleted User]> #38
+1
[Deleted User] <[Deleted User]> #39
We can connect to Cloud Spanner but the pricing is crazy for doing development.
my...@gmail.com <my...@gmail.com> #40
I developed the app and tested it in the local/offline Functions environment, where I can successfully do CRUD against my Cloud SQL instance. But when I deploy the function, there is no way to authenticate it to the database.. I only have firewall-like control, and I have to blow it wide open since the function's IP is not going to be stable.. Oh well.. This shouldn't be like it is! Please fix ASAP.
re...@gmail.com <re...@gmail.com> #41
+1
ju...@gmail.com <ju...@gmail.com> #42
+1
pi...@gmail.com <pi...@gmail.com> #43
Hi all, this works fine for me, using the mysql npm package.
// index.js
var mysql = require('mysql');
exports.writeToSQL = function writeToSQL(req, res) {
  if (req.body === undefined) {
    res.status(400).send('No message defined!');
  } else {
    var connection = mysql.createConnection({
      host: '<IP OF THE CLOUD SQL INSTANCE>',
      user: '<NEW DB USER>',
      password: '<NEW DB PWD>',
      database: '<DB NAME>'
    });
    connection.connect();
    var values = { DT_CREATED: new Date(), MESSAGE: JSON.stringify(req.body), SOURCE: req.ip };
    connection.query('INSERT INTO EVENTS SET ?', values, function (error, results, fields) {
      if (error) console.log(error);
      else console.log("inserted successfully");
    });
    connection.end();
    res.status(200).send('Success: ' + req.body.message);
  }
};
// package.json
{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "mysql": "^2.13.0"
  }
}
rs...@google.com <rs...@google.com> #44
This is not an official announcement (stay tuned), but as Pietro noticed, access to Cloud SQL started working recently. We are still performing tests, but you should be able to experiment with the feature by specifying:
socketPath: '/cloudsql/' + CloudSQLInstanceId (<project-id>:<region>:<name>)
in the createConnection method.
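A minimal sketch of that configuration, assuming the mysql npm package (the instance connection name and credentials below are placeholders):
// Connect over the /cloudsql Unix socket exposed to the function at runtime.
var mysql = require('mysql');
var connection = mysql.createConnection({
  socketPath: '/cloudsql/my-project:us-central1:my-instance', // <project-id>:<region>:<name>
  user: 'DB_USER',
  password: 'DB_PASSWORD',
  database: 'DB_NAME'
});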
ch...@gmail.com <ch...@gmail.com> #45
Great news!!!!
Checked and it's working!
Keep us posted on the official release.
Thank you!
[Deleted User] <[Deleted User]> #46
+1
[Deleted User] <[Deleted User]> #47
+1
ad...@pauloddr.com <ad...@pauloddr.com> #48
Will it work with Cloud SQL Postgres instances too? It seems it's not currently working, either by IP or socketPath (via host).
[Deleted User] <[Deleted User]> #49
+1 for Postgres. Please support !!
pa...@gmail.com <pa...@gmail.com> #50
+1 for Postgres
[Deleted User] <[Deleted User]> #51
+1 for Postgres! PLEASE
ak...@gmail.com <ak...@gmail.com> #52
+1
rs...@google.com <rs...@google.com>
[Deleted User] <[Deleted User]> #53
Any chance we can get a rough idea of timing on the Postgres side now that this issue is assigned ?
Thanks,
Alex
ja...@google.com <ja...@google.com> #54
Postgres is coming... but just to be clear; we haven't officially announced this yet and it's not officially supported, so use at your peril!
ej...@gmail.com <ej...@gmail.com> #55
+1
[Deleted User] <[Deleted User]> #56
Will postgres be the same as mysql for connection config ? config.host: '/cloudsql/' + CloudSQLInstanceId ?
ja...@google.com <ja...@google.com> #57
I believe so yes. I reserve the right to be wrong of course, although I'm eventually consistent ;)
ve...@gmail.com <ve...@gmail.com> #58
+1 for Postgres
sa...@gmail.com <sa...@gmail.com> #59
+1 for Postgres support!
te...@gmail.com <te...@gmail.com> #60
+1
ch...@localcover.com <ch...@localcover.com> #61
+1
da...@gmail.com <da...@gmail.com> #62
+1 for postgres support !
ro...@eits.com.br <ro...@eits.com.br> #63
+1 pg support
pb...@gmail.com <pb...@gmail.com> #64
+1
gi...@gmail.com <gi...@gmail.com> #65
+1
pe...@gmail.com <pe...@gmail.com> #66
+1
ve...@gmail.com <ve...@gmail.com> #67
This is truly needed.
se...@gmail.com <se...@gmail.com> #68
I hope it can be supported officially soon! :P I need it for my project! Thank you!
pu...@gmail.com <pu...@gmail.com> #69
I need it for my project as well. Hope it gets officially supported soon.
ga...@hangenixsolutions.com <ga...@hangenixsolutions.com> #70
+1
[Deleted User] <[Deleted User]> #71
+1
lr...@g.hmc.edu <lr...@g.hmc.edu> #72
+1
dm...@gmail.com <dm...@gmail.com> #73
+1
ja...@yaku.to <ja...@yaku.to> #74
+1
sh...@gmail.com <sh...@gmail.com> #75
+1
dr...@pixtainc.com <dr...@pixtainc.com> #76
+1 for Postgres
ma...@gmail.com <ma...@gmail.com> #77
+1 for Postgres
eh...@gmail.com <eh...@gmail.com> #78
+1
br...@gmail.com <br...@gmail.com> #79
+1
pe...@axon.es <pe...@axon.es> #80
+1 for Postgres
ja...@gmail.com <ja...@gmail.com> #81
+1
kh...@gmail.com <kh...@gmail.com> #82
+1
br...@regiscope.com <br...@regiscope.com> #83
+1
ga...@eforall.org <ga...@eforall.org> #84
C'mon, Google. This is such an important feature. Can you please give us a ballpark estimate of when it will be available so we can plan appropriately?
Is it one month or two years?
ca...@gmail.com <ca...@gmail.com> #85
+1 for pg. One thing that makes companies great for developers is that they follow reasonable logic. Please fix this ASAP; it is something everyone expects to be there.
rv...@reso.no <rv...@reso.no> #86
+1
[Deleted User] <[Deleted User]> #87
+1
[Deleted User] <[Deleted User]> #88
+1
rv...@gmail.com <rv...@gmail.com> #89
After experimenting with
socketPath: '/cloudsql/' + CloudSQLInstanceId (<project-id>:<region>:<name>) (see prev post)
I run into connection errors when the cloud function is called at higher volume.
We need something like connection sharing or connection reuse, otherwise this is not going to fly.
ja...@google.com <ja...@google.com> #90
Hi Ronald,
How are you creating the connection? More specifically.. *where* are you creating the connection? (I.e. where in the code)
co...@gmail.com <co...@gmail.com> #91
+1 for PostgreSQL. ;-)
rv...@gmail.com <rv...@gmail.com> #92
I connect inside the cloud function call, but your question hints that I can also connect globally and reuse that connection. Thanks, I'll try this.
ig...@google.com <ig...@google.com> #93
Hi Ronald,
A global connection would be better and will likely solve your problem.
ja...@google.com <ja...@google.com> #94
Also double-check that you're closing the connection safely. Some of the examples I've seen might be treating the "end()" call as synchronous, where in fact it's async, so you should make sure you return/callback from the function _after_ end completes.
Eg. assuming you're using this...
https://github.com/mysqljs/mysql#terminating-connections
Then this...
connection.end(function(err) {
callback(err);
});
ja...@google.com <ja...@google.com> #95
or even.. connection.end(callback)
rv...@gmail.com <rv...@gmail.com> #96
Could you post a more complete example of a best-practice way to use a MySQL socket in a cloud function? This would help a lot!
For instance: is it better to open/close a connection in the handler function, or to open a global MySQL connection and never close it? How many connections can I expect when using a global MySQL connection? Is it the max number of started instances for the cloud function? The MySQL socket seems to work, but some guidance on how to use it would be useful.
ig...@google.com <ig...@google.com> #97
The following applies to using relational databases in general. It is not specific to Google products.
There is a cost for establishing a connection to a database. Opening a connection per-request is bad practice. Connections should be reused when possible. Even worse, if you do one connection per request and don't close them, you will leak connections and eventually run into some sort of database connection limit.
If your code will only service one request at a time, you should use a single shared connection. As long as all of your transactions complete successfully, you don't need to worry about flushing/closing the connection as everything is committed and the connection will be closed when the process exits.
If you are going to service multiple requests simultaneously, you should use a shared connection pool. You probably don't need to worry about closing the pool, but it is important to return connections to the pool when you are done with them to avoid leaking connections.
ja...@google.com <ja...@google.com> #98
My post and Ian's might be confusing.. let me try to clarify.
Creating the connection in global scope is preferred, because it will be re-used if subsequent requests hit the same instance. In that case you shouldn't need to worry about calling "end()".
If you do use global scope, make sure that you wait for any operations to complete before returning. For example:
pool.query('SELECT 1 + 1 AS solution', function (error, results, fields) {
  if (error) {
    callback(error);
    return;
  }
  callback(null, results);
});
Try to avoid doing this...
var post = {id: 1, title: 'Hello MySQL'};
var query = connection.query('INSERT INTO posts SET ?', post, function (error, results, fields) {
  if (error) throw error;
  // Neat!
});
callback();
The callback here might be called before the INSERT has actually completed.
If you're sticking with having a per-request connection, make sure you close it before you exit using end() and return from the function in the callback for end() (don't just fire-and-forget the call to end() and return immediately after)
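Pulling the advice in #97 and #98 together, a minimal sketch of a pool created in global scope and reused across invocations (assuming the mysql npm package; the environment variable names are placeholders):
// Created once per function instance and reused by subsequent invocations.
var mysql = require('mysql');
var pool = mysql.createPool({
  connectionLimit: 1, // each function instance serves one request at a time
  socketPath: '/cloudsql/' + process.env.INSTANCE_CONNECTION_NAME, // <project-id>:<region>:<name>
  user: process.env.SQL_USER,
  password: process.env.SQL_PASSWORD,
  database: process.env.SQL_DATABASE
});
exports.handler = function (req, res) {
  // The query callback fires only after the work is done, so the response
  // is not sent before the database call completes.
  pool.query('SELECT 1 + 1 AS solution', function (error, results) {
    if (error) {
      console.error(error);
      res.status(500).send(error.code);
      return;
    }
    res.status(200).json(results);
  });
};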
eh...@gmail.com <eh...@gmail.com> #99
I've tried the socketPath connection; it seemed unreliable for me at times, even with a small number of connections. I was experiencing spurious connection errors. I had to switch to an IP connection and allow 0.0.0.0/0 into the MySQL instance. This is really unsafe, but I had no choice. Here is my code, via the npm mysql package:
function _executeSql(sql) {
  return new Promise(function(resolve, reject) {
    // Get a connection from the pool
    pool.getConnection((error, connection) => {
      if (error) {
        console.error(`connection error = `, error);
        reject(error);
      } else {
        connection.query(sql, (error, results, fields) => {
          // And done with the connection.
          connection.release();
          // Handle error after the release.
          if (error) {
            console.error("query error = ", error);
            reject(error);
          } else {
            resolve(results);
          }
          // Don't use the connection here, it has been returned to the pool.
        });
      }
    });
  });
}
al...@gmail.com <al...@gmail.com> #100
Hi all,
does "socketPath: '/cloudsql/' + CloudSQLInstanceId (<project-id>:<region>:<name>)" work on any type of Firebase pricing plan?
does "socketPath: '/cloudsql/' + CloudSQLInstanceId (<project-id>:<region>:<name>)" works on any type of Firebase pricing package?
al...@gmail.com <al...@gmail.com> #101
I tried many things, but still get ECONNREFUSED
Should I enable some special Authorization to make Cloud SQL Mysql accessible for Firebase Cloud functions?
Error: connect ECONNREFUSED /cloudsql/silent-thunder-177604:us-central1:alextest2
at Object.exports._errnoException (util.js:1018:11)
at exports._exceptionWithHostPort (util.js:1041:20)
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1086:14)
--------------------
at Protocol._enqueue (/user_code/node_modules/mysql/lib/protocol/Protocol.js:145:48)
at Protocol.handshake (/user_code/node_modules/mysql/lib/protocol/Protocol.js:52:23)
at Connection.connect (/user_code/node_modules/mysql/lib/Connection.js:130:18)
at exports.getData.functions.https.onRequest (/user_code/index.js:28:11)
at cloudFunction (/user_code/node_modules/firebase-functions/lib/providers/https.js:26:47)
at /var/tmp/worker/worker.js:635:7
at /var/tmp/worker/worker.js:619:9
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickDomainCallback (internal/process/next_tick.js:128:9)
ja...@gmail.com <ja...@gmail.com> #102
+1 Cloud functions are useless until this gets added.
ja...@gmail.com <ja...@gmail.com> #103
+1 AWS Lambda is the most pragmatic (and secure) solution for now if you want to combine an SQL database with a serverless architecture.
ni...@gmail.com <ni...@gmail.com> #104
+1 I'd love to see this implemented, it'd literally change everything about how I do backend development. (for the much much better).
al...@appliedmetaphysics.com <al...@appliedmetaphysics.com> #105
Went with GAE myself, but would really like to see this implemented for PostgreSQL. Since it's been several weeks since things unofficially started working on MySQL, can we expect anything relatively soon on the Postgres side?
[fingers crossed]
fe...@gmail.com <fe...@gmail.com> #106
+1 for Postgres with Cloud Functions!!
le...@gmail.com <le...@gmail.com> #107
+1
an...@gmail.com <an...@gmail.com> #108
+1
[Deleted User] <[Deleted User]> #109
+1
ra...@gmail.com <ra...@gmail.com> #110
+1
aa...@lifeiscontent.net <aa...@lifeiscontent.net> #111
any updates here team?
we...@guidingstartechnologies.com <we...@guidingstartechnologies.com> #112
+1
te...@gmail.com <te...@gmail.com> #113
+1 any news?
oo...@gmail.com <oo...@gmail.com> #114
Glad you asked. This week Google introduced a new document-based NoSQL database inside Firebase called Cloud Firestore that works with Cloud Functions: https://firebase.googleblog.com/2017/10/introducing-cloud-firestore.html
ga...@eforall.org <ga...@eforall.org> #115
Any news on connecting CF to CloudSQL?
[Deleted User] <[Deleted User]> #116
+1
va...@gmail.com <va...@gmail.com> #117
+1
[Deleted User] <[Deleted User]> #118
+1 Postgres please :)
[Deleted User] <[Deleted User]> #119
+1
je...@gmail.com <je...@gmail.com> #120
Guys.... If you don't have any other constructive information to contribute with, please just star the issue. You don't have to make a +1 post. Google will know from the amount of stars that you think this is important. You spam all of us with your +1 messages.
And Google, please make a filter that blocks +1 messages as they are really annoying and only make me wanna unstar this issue....
ro...@undeadindustries.com <ro...@undeadindustries.com> #121
Any updates on MySQL and PSQL being officially supported?
It's been 15 or 16 months since the "unofficial" announcement.
ja...@google.com <ja...@google.com> #122
Hi folks,
Really sorry for the delay here. For those interested in accessing Cloud SQL (MySQL & PostgreSQL) from Cloud Functions, please sign up here:
https://goo.gl/forms/CHoiHwsBWMx5GZ3w1
We'll be sending out guidance to those who sign up very soon
ca...@gmail.com <ca...@gmail.com> #123
Any news about the guidance?
Thanks
ig...@google.com <ig...@google.com> #124
An initial round of people who signed up have been added to the early access program.
pa...@gmail.com <pa...@gmail.com> #125
Hi,
I registered and I hope you send the guidance as soon as possible.
Thanks for your efforts.
ja...@google.com <ja...@google.com> #126
I just sent out a new batch of invites. Post back if you expected one and didn't get it
aa...@lifeiscontent.net <aa...@lifeiscontent.net> #127
hey, can I get an invite?
ca...@gmail.com <ca...@gmail.com> #128
I didn't receive any invite.
Thanks
Fabio
si...@gmail.com <si...@gmail.com> #129
We are beginning to look into Cloud SQL and this will be a core requirement - signed up for the guidance above.
ja...@google.com <ja...@google.com> #130
New batch of updates sent :)
jc...@sertech.mx <jc...@sertech.mx> #131
We have an existing Firebase/Firestore project, but a new feature would require Postgres geographical capabilities. Being able to use Cloud Functions would save me from a painful redo in Express on Compute Engine. Thanks in advance.
[Deleted User] <[Deleted User]> #132
Very interested as well. We are going to be pushing to Postgres.
yo...@gmail.com <yo...@gmail.com> #133
+1
dc...@gmail.com <dc...@gmail.com> #134
+1
[Deleted User] <[Deleted User]> #135
+1
ig...@google.com <ig...@google.com> #136
For all of you replying +1, what is it that you are +1ing?
is...@humbike.com <is...@humbike.com> #137
^ Cloud Functions support for Cloud SQL (w Postgres)
ig...@google.com <ig...@google.com> #138
It is available now. Sign up using the form in #122.
ku...@gmail.com <ku...@gmail.com> #139
Could you please provide access to the beta (#122) for the new batch of users who have been waiting for acceptance for two weeks?
Thanks for your help.
zi...@gmail.com <zi...@gmail.com> #140
Kinda assumed that this would work out of the box; luckily it's being tested already. If more testers are needed, let me know, I would love to help!
Regards,
Zino Hofmann
Chief Innovation Officer
ga...@google.com <ga...@google.com> #141
User Maree Beare (maree.beare@hautla.com) has just submitted a form but did not receive any invite.
hi...@gmail.com <hi...@gmail.com> #142
Hi, please include me as well in the invites you send out next. I would like to connect to CloudSQL (Mysql or Postgres either would work) via cloud functions.
[Deleted User] <[Deleted User]> #143
Hi all
Does the experimental feature mentioned in post #44 still work, or am I doing something wrong?
var CloudSQLInstanceId = 'project_id:proj_region:proj_name';
var connection = mysql.createConnection({
  // host: 'HOST_IP',
  user: 'root',
  password: 'PASSWORD',
  database: 'DATABASE_NAME',
  socketPath: '/cloudsql/' + CloudSQLInstanceId
});
connection.connect(function(err) {
  if (err) {
    // console.error('error connecting: ' + err.stack);
    response.json({err: err.stack});
    return;
  }
});
The above code keeps throwing Error: connect ENOENT /cloudsql/project_id:proj_region:proj_name
UPDATE:
To clarify: the above works once the Cloud Function has been deployed;
it does not work when trying to test LOCALLY on the Cloud Functions Local Emulator.
UPDATE:
To test Cloud Functions locally using the Local Emulator, use the Cloud SQL Proxy and set the port to whatever your test environment is running on.
Regards
Ameer
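A minimal sketch of switching between the deployed socket and a local Cloud SQL Proxy (assuming the mysql npm package; the LOCAL_DEV flag and the environment variable names are hypothetical placeholders):
var mysql = require('mysql');
// LOCAL_DEV is a hypothetical flag set only when running under the emulator,
// where the Cloud SQL Proxy is assumed to be listening on 127.0.0.1:3306.
var options = process.env.LOCAL_DEV
  ? { host: '127.0.0.1', port: 3306 }
  : { socketPath: '/cloudsql/' + process.env.INSTANCE_CONNECTION_NAME };
options.user = process.env.SQL_USER;
options.password = process.env.SQL_PASSWORD;
options.database = process.env.SQL_DATABASE;
var connection = mysql.createConnection(options);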
[Deleted User] <[Deleted User]> #144
For anyone stumbling upon this issue, I can confirm that using the socket method works with node-postgres client (pg on npm) for postgreSQL.
The trick is to set the 'host', not 'socketPath' (hosts starting with / are sockets for pg: https://www.postgresql.org/docs/9.1/static/libpq-connect.html#LIBPQ-CONNECT-HOST ).
Another tip: you can copy the long socket path directly from the Google Cloud Console -> SQL -> Instance page. In the box titled "Connect to this instance", click to copy the "Instance connection name". Then prepend it with "/cloudsql/" for the host connection parameter.
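A minimal sketch of that approach with node-postgres (assuming the pg npm package; the instance connection name and credentials are placeholders):
// A host value starting with '/' is treated as a Unix socket directory.
var pg = require('pg');
var pool = new pg.Pool({
  host: '/cloudsql/my-project:us-central1:my-instance', // instance connection name prefixed with /cloudsql/
  user: 'DB_USER',
  password: 'DB_PASSWORD',
  database: 'DB_NAME',
  max: 1
});
pool.query('SELECT NOW()', function (err, result) {
  if (err) console.error(err);
  else console.log(result.rows);
});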
[Deleted User] <[Deleted User]> #145
Hi, is the early access program still accessible ? I want to use Cloud SQL (postgreSQL) in my cloud functions for the project I'm working on.
Thank you in advance.
[Deleted User] <[Deleted User]> #146
+1, I want to use Cloud SQL function for MySQL
ni...@egym.com <ni...@egym.com> #147
+1
fr...@gmail.com <fr...@gmail.com> #148
+1
ru...@gmail.com <ru...@gmail.com> #149
+1
ca...@gmail.com <ca...@gmail.com> #150
+1
ed...@kpeyes.io <ed...@kpeyes.io> #151
+1
jh...@6river.com <jh...@6river.com> #152
+1
[Deleted User] <[Deleted User]> #153
+1
[Deleted User] <[Deleted User]> #154
+1
[Deleted User] <[Deleted User]> #155
+1
la...@lemonde.fr <la...@lemonde.fr> #156
+1
[Deleted User] <[Deleted User]> #157
Instead of +1ing you should star the issue at the top left.
Any update on this from Google as to when we should expect access to Cloud SQL?
My use case is to write a small script that will run once a day to purge old logs in the SQL database. Cloud Functions also needs time based trigger functionality for this to work.
ja...@google.com <ja...@google.com> #158
Cloud SQL sign up: https://docs.google.com/forms/d/1Yhs7Jz5d16MglHO_ucJB2hgI_wKEAmYrYQd0xSkHVys/edit (#122)
Scheduling is coming; for now you can wire up an App Engine cron to Pub/Sub and trigger a function that way. Yes, I know it's ugly.
[Deleted User] <[Deleted User]> #159
I might as well just write the code in App Engine then.
ja...@google.com <ja...@google.com> #160
Yep.. that also works.
pa...@gmail.com <pa...@gmail.com> #161
+1 would love some sort of official support for communicating with my cloud postgres instance via a cloud function
pa...@remoteorigin.co <pa...@remoteorigin.co> #162
+1
de...@valendesigns.com <de...@valendesigns.com> #163
+1
ca...@gmail.com <ca...@gmail.com> #164
More than one year and still no official support? I started an application last year, I saw this and said: by the time my app gets to production, surely this issue will be resolved. My app is in production now, I don't have official support, and Cloud Functions are randomly getting ECONNREFUSED from the MySQL instance. I'm paying for a first-class SQL service that doesn't feel like one.
I know you never told me: "it's official, that works" but guys, one year with this?
UPDATE:
Support contacted us and helped fix the issue. We had a lot of functions that weren't using connection pools; once we implemented connection pooling in all of our Cloud Functions, the ECONNREFUSED error was gone. Seven days without any error since the update, thanks.
Use global variables to reuse objects in future invocations
https://cloud.google.com/functions/docs/bestpractices/tips#use_global_variables_to_reuse_objects_in_future_invocations
ma...@gmail.com <ma...@gmail.com> #165
+1
th...@gmail.com <th...@gmail.com> #166
lol +1
sa...@gmail.com <sa...@gmail.com> #167
+1
ra...@softwarechain.io <ra...@softwarechain.io> #168
+1
an...@gmail.com <an...@gmail.com> #169
+1
va...@gmail.com <va...@gmail.com> #170
May I know how to connect Cloud Functions to a Cloud SQL MySQL database? Please could you suggest any sample code for connecting functions to MySQL?
[Deleted User] <[Deleted User]> #171
Because Firebase Functions is a modified NodeJS server, you're going to need to:
- Install any NodeJS MySQL library and learn how to use it.
- Then, you should add `0.0.0.0/0` to Authorization.
Here's one of the MySQL libraries:
https://www.npmjs.com/package/mysql
[Deleted User] <[Deleted User]> #173
IMO: Every MySQL database can connect from any external IP address.
However, there does not seem to be a VPN connection... (maybe not, yet)
If you are using Compute Engine however, you can lookup the range of IP address using this method.
https://cloud.google.com/compute/docs/faq#where_can_i_find_product_name_short_ip_ranges
I'm not sure if it is applicable with Google functions.
sa...@gmail.com <sa...@gmail.com> #174
+100000
ko...@lixil.com <ko...@lixil.com> #175
+1
[Deleted User] <[Deleted User]> #176
+1
re...@gmail.com <re...@gmail.com> #177
+1
ti...@icloud.com <ti...@icloud.com> #178
+1
na...@gmail.com <na...@gmail.com> #179
+1
fr...@gmail.com <fr...@gmail.com> #180
+1 postgres
ga...@gmail.com <ga...@gmail.com> #181
+1
oo...@gmail.com <oo...@gmail.com> #183
[Deleted User] <[Deleted User]> #184
Does somebody know how to connect from Python?
ja...@google.com <ja...@google.com> #185
Depends if you're using MySQL or Postgres. We'll have new docs up soon for Python, but here's a quick example (untested):
MySQL:
------------
from os import getenv
import pymysql
mysql_config = {
    'user': getenv('SQL_USER'),
    'password': getenv('SQL_PASSWORD'),
    'db': getenv('SQL_DATABASE'),
    'charset': 'utf8mb4',
    'cursorclass': pymysql.cursors.DictCursor,
    'autocommit': True,
    'unix_socket': '/cloudsql/' + getenv('INSTANCE_CONNECTION_NAME')
}
mysql_connection = pymysql.connect(**mysql_config)
Postgres:
------------
from os import getenv
import psycopg2
pg_config = {
    'user': getenv('SQL_USER'),
    'password': getenv('SQL_PASSWORD'),
    'dbname': getenv('SQL_DATABASE'),
    'host': '/cloudsql/' + getenv('INSTANCE_CONNECTION_NAME')
}
pg_connection = psycopg2.connect(**pg_config)
In both cases, INSTANCE_CONNECTION_NAME should follow the form discussed in the docs: https://cloud.google.com/functions/docs/sql (look for "INSTANCE_CONNECTION_NAME")
These examples assume SQL_USER, SQL_PASSWORD and SQL_DATABASE are set as environment variables at function deployment time.
ma...@versionx.in <ma...@versionx.in> #186
Trying to connect to another project's Cloud SQL MySQL 2nd gen instance.
For Example:
------------------
From project1's Cloud Functions, trying to connect to project2's Cloud SQL MySQL 2nd gen instance.
Cloud function:
-------------------
let connection = mysql.createConnection({
  host: 'xx.xxx.xx.xxx', // Primary IP address
  user: 'userName',
  password: 'password',
  database: 'databaseName'
});
connection.query('SELECT * from Table;', function (err, rows, fields) {
  if (err) {
    console.log(err);
  } else {
    console.log(rows);
    connection.end();
  }
});
ERROR:
-----------
Error: connect ETIMEDOUT
at Connection._handleConnectTimeout (/user_code/node_modules/mysql/lib/Connection.js:411:13)
at Socket.g (events.js:292:16)
at emitNone (events.js:86:13)
at Socket.emit (events.js:185:7)
at Socket._onTimeout (net.js:348:8)
at ontimeout (timers.js:386:11)
at tryOnTimeout (timers.js:250:5)
at Timer.listOnTimeout (timers.js:214:5)
--------------------
at Protocol._enqueue (/user_code/node_modules/mysql/lib/protocol/Protocol.js:144:48)
at Protocol.handshake (/user_code/node_modules/mysql/lib/protocol/Protocol.js:51:23)
at Connection.connect (/user_code/node_modules/mysql/lib/Connection.js:118:18)
at cors (/user_code/modules/authUser.js:219:18)
at cors (/user_code/node_modules/cors/lib/index.js:188:7)
at /user_code/node_modules/cors/lib/index.js:224:17
at originCallback (/user_code/node_modules/cors/lib/index.js:214:15)
at /user_code/node_modules/cors/lib/index.js:219:13
at optionsCallback (/user_code/node_modules/cors/lib/index.js:199:9)
at corsMiddleware (/user_code/node_modules/cors/lib/index.js:204:7)
errorno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect',
fatal: true
ig...@google.com <ig...@google.com> #187
The documentation (https://cloud.google.com/functions/docs/sql) suggests using the Unix socket interface (/cloudsql) instead of a public IP address. In order to access the other project's database via the Unix socket interface, you need to grant IAM permissions (https://cloud.google.com/functions/docs/sql#cross_project).
au...@poeticdata.com <au...@poeticdata.com> #188
I have both a MySQL instance and a PostgreSQL instance allocated under the same project as my Firebase project. I'm using the blaze plan with the same billing for both.
I've attempted to connect from a Cloud Function as described here, using both the pg and mysql clients:
https://cloud.google.com/functions/docs/sql
Neither works. I received the following error every time:
{"code":"ECONNREFUSED","errno":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":3306,"fatal":true}
UPDATE: RESOLVED
ig...@google.com <ig...@google.com> #189
You need to use /cloudsql/INSTANCE_CONNECTION_NAME instead of 127.0.0.1 as the address.
au...@poeticdata.com <au...@poeticdata.com> #190
I was, but the environment variable had a typo.
et...@gmail.com <et...@gmail.com> #191
hi...@gmail.com <hi...@gmail.com> #192
+1
cr...@crius.com.br <cr...@crius.com.br> #193
+1
ig...@google.com <ig...@google.com> #194
If you are still having problems with Cloud SQL, please create a new issue.
[Deleted User] <[Deleted User]> #195
+1
fa...@gmail.com <fa...@gmail.com> #196
+1
[Deleted User] <[Deleted User]> #197
Does anyone know how to connect to Cloud SQL with pg-promise? I am using tasks to perform three different queries; one of which is a multi-row insert
[Deleted User] <[Deleted User]> #198
Traceback (most recent call last):
  File "/user_code/main.py", line 58, in postgres_demo
    __connect(f'/cloudsql/{CONNECTION_NAME}/.s.PGSQL.5432')
  File "/user_code/main.py", line 47, in __connect
    pg_pool = SimpleConnectionPool(1, 1, **pg_config)
  File "/env/local/lib/python3.7/site-packages/psycopg2/pool.py", line 58, in __init__
    self._connect()
  File "/env/local/lib/python3.7/site-packages/psycopg2/pool.py", line 62, in _connect
    conn = psycopg2.connect(*self._args, **self._kwargs)
  File "/env/local/lib/python3.7/site-packages/psycopg2/__init__.py", line 130, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Not a directory
    Is the server running locally and accepting connections on Unix domain socket "/cloudsql/cei-data-science:us-east1:new-db-3/.s.PGSQL.5432/.s.PGSQL.5432"?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 383, in run_background_function
    _function_handler.invoke_user_function(event_object)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 217, in invoke_user_function
    return call_user_function(request_or_event)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 214, in call_user_function
    event_context.Context(**request_or_event.context))
  File "/user_code/main.py", line 78, in main
    postgres_demo(None)
  File "/user_code/main.py", line 61, in postgres_demo
    __connect('localhost')
  File "/user_code/main.py", line 47, in __connect
    pg_pool = SimpleConnectionPool(1, 1, **pg_config)
  File "/env/local/lib/python3.7/site-packages/psycopg2/pool.py", line 58, in __init__
    self._connect()
  File "/env/local/lib/python3.7/site-packages/psycopg2/pool.py", line 62, in _connect
    conn = psycopg2.connect(*self._args, **self._kwargs)
  File "/env/local/lib/python3.7/site-packages/psycopg2/__init__.py", line 130, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
    Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?
Anyone know how to fix it? I copied the code directly from GCP.
bo...@gmail.com <bo...@gmail.com> #199
+1
da...@anthonynolan.org <da...@anthonynolan.org> #200
+1
be...@gmail.com <be...@gmail.com> #201
+1
pa...@opsguru.io <pa...@opsguru.io> #202
+1
sa...@gmail.com <sa...@gmail.com> #203
+1
vo...@gmail.com <vo...@gmail.com> #204
Please!
da...@gmail.com <da...@gmail.com> #205
+1
an...@gmail.com <an...@gmail.com> #206
+1
de...@gmail.com <de...@gmail.com> #207
+1
ja...@google.com <ja...@google.com> #208
Unix domain socket "/cloudsql/cei-data-science:us-east1:new-db-3/.s.PGSQL.5432/.s.PGSQL.5432"?
That doesn't look right (duplicate ".s.PGSQL.5432"). Maybe our docs/sample is wrong
[Deleted User] <[Deleted User]> #209
+1
ma...@cookiescalifornia.com <ma...@cookiescalifornia.com> #210
+1
[Deleted User] <[Deleted User]> #211
Trying to find a way to connect to a Postgres Cloud SQL instance where the SQL instance is in a different project than the Cloud Function.
Tried the SQLAlchemy method in the documentation and psycopg2;
neither seems to connect to the Cloud SQL instance in the other project.
IAM rules from the Cloud Function project are applied as per the doc.
Also, the connection using the Cloud SQL Proxy works locally and on a GCP notebook.
Description
Errors:
TCP: Error: connect ETIMEDOUT (to the IPv4 address of the SQL instance)
Socket: Error: connect ENOENT /cloudsql/grand-proton-161417:us-central1:everybag (Instance connection name)