=== kadams54-away is now known as kadams54
[01:48] any idea when the pure-python branch of lp:charms/python-django will land?
[01:49] I'd like to consider a change in how python-django pip installs things. I'd like it to be able to pip install from local wheels.
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== jseutter_ is now known as jseutter
=== rsynnest_ is now known as rsynnest
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
=== roadmr is now known as roadmr_afk
=== CyberJacob|Away is now known as CyberJacob
=== roadmr_afk is now known as roadmr
=== alexlist` is now known as alexlist
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
=== underyx|off is now known as underyx
[15:04] Hi guys. Is anyone free to answer a quick question about the nova-cloud-controller charm and the new neutron-api charm?
[15:27] We have 30 minutes until the Juju Big Data UOS session. Please attend if you are interested in BIG DATA
[15:27] http://summit.ubuntu.com/uos-1411/meeting/22392/big-data-and-juju/
[15:33] johnmce, hi, what's the question?
[15:36] gnuoy: Hi. I'm upgrading my test cluster from Icehouse to Juno. Had a few breakages along the way. Now deploying updated nova-cc charm and neutron-api. nova-cc charm fails to establish mysql relationship due to failed neutron db upgrade.
[15:37] Was wondering if nova-cc should still be fiddling with the neutron db in the presence of neutron-api
[15:38] johnmce, unfortunately, yes. the nova-cc charm is in charge of running db migrations for neutron for os >= Juno
[15:39] johnmce, from a juju pov there is a relation between nova-cc and mysql but the neutron db migration was not run, is that right?
[15:39] gnuoy: OK, I don't suppose you would know how to work around this failure. Command-line being run on nova-cc node is "/usr/bin/neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"
[15:40] gnuoy: Error is "sqlalchemy.exc.OperationalError: (OperationalError) (1050, "Table 'agents' already exists") '\nCREATE TABLE agents ......"
[15:40] gnuoy: So, I seem to have at least a partial schema update
[15:41] johnmce, is it definitely partial?
[15:41] I mean, are you sure the migration hasn't run through?
[15:41] gnuoy: Well, I seem to have a table that it thinks I shouldn't have.
[15:42] gnuoy: Presumably that table didn't exist pre-Juno, yet I seem to have it.
[15:42] gnuoy: If I don't need a schema update (migration), then the logic that determines when an upgrade should be performed must be broken.
[15:43] johnmce, that is possible. Do you see any other evidence that the neutron db is not in the state it should be?
[15:43] errors from neutron-server etc
[15:44] gnuoy: I'm not familiar with the db changes required for a Juno migration, so I wouldn't know what to look for. I've not checked the neutron server logs.
[15:45] johnmce, I'm just wondering if everything is rosy but, as you say, the logic around when to run the migration is broken
[15:46] gnuoy: Things are pretty broken generally right now, so I'm just taking it one step at a time. Been fixing up other charms as I go.
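A quick way to answer gnuoy's question about whether the migration actually ran through is to ask alembic what revision the neutron schema is currently stamped with. This is a minimal sketch only, assuming neutron-db-manage exposes alembic's standard "current" subcommand; the config-file paths are the ones quoted in the log above.

    import subprocess

    NEUTRON_DB_MANAGE = [
        'neutron-db-manage',
        '--config-file=/etc/neutron/neutron.conf',
        '--config-file=/etc/neutron/plugins/ml2/ml2_conf.ini',
    ]

    def current_revision():
        # Ask alembic (via neutron-db-manage) which revision the schema
        # is currently stamped with.
        return subprocess.check_output(NEUTRON_DB_MANAGE + ['current']).strip()

    if __name__ == '__main__':
        # A revision at (or near) head would mean the Juno migration has
        # already run, and re-running "upgrade head" is not what's needed.
        print(current_revision())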
[15:47] gnuoy: Keystone, glance, horizon are all good, but nova_cc can't get past this
[15:48] johnmce, when you say it can't get past it, do you mean a juju hook keeps erroring?
[15:49] gnuoy: I've destroyed the nova-cc service and re-deployed numerous times, so maybe the migration happened before. The juju hook for shared-db always fails. "'hook failed: "shared-db-relation-changed" for percona-cluster:shared-db'"
[15:51] johnmce, right, I see. Sounds like a charm bug that migrations fail if the schema is already present.
[15:52] johnmce, I can work on trying to reproduce and getting a fix tomorrow.
[15:52] johnmce, could you raise a bug report please?
=== Spads_ is now known as Spads
[15:54] gnuoy: I don't mind fixing it myself, if I can get a handle on what it's doing. Do you have any idea offhand what clues the charm looks for to decide a migration is needed, or does it just unconditionally call the neutron-db-manage script?
[15:54] Greetings #juju - UOS 1411 - Big Data track is going to start in about 6 minutes. If you'd like to participate - https://plus.google.com/hangouts/_/hoaevent/AP36tYfh0sDqCTgtXmsp4LRdu4lnwysNlJ0jMTS7tlh8HNWfgen-Tw?authuser=0&hl=en
[15:54] johnmce, I think it blindly calls it on the establishment of a relation with percona
[15:54] gnuoy: Maybe the logic should be in neutron-db-manage?
[15:55] johnmce, yes, I think that is true. I wonder if the nova db manage utility can be rerun without explosions
[15:57] gnuoy: I'll try commenting that 'create table' bit out for now and see if I can get it to progress to completion. Looks like it's got some broken logic somewhere though.
[15:59] johnmce, I'm not sure I'd want to comment that 'create table' out tbh, you might get some unexpected results from neutron-db-manage
[15:59] I'd be tempted to mark the failed hook as resolved and see if things continue from there
[16:00] gnuoy: I've already tried repeatedly to say it's resolved, but it always re-runs that script. You'd think they'd use an "if not exists" type option on table creation.
[16:01] yeah, that's be good
[16:01] s/that's/that'd/
[16:09] gnuoy: This goes well beyond a single table. It's trying to create every table from scratch, when they already exist.
[16:12] gnuoy, dosaboy: did you have any thoughts on tuning down the log level across the openstack charms?
[16:12] dosaboy, just thinking about it in the context of the ceph-broker work but we probably need to discuss more generally what's the right level
[16:13] dosaboy, I personally think our default level should be DEBUG with only end-user useful info going at INFO level
[16:16] jamespage: agreed
[16:16] same as openstack
[16:16] dosaboy, oo - do they have that documented somewhere?
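On the log-level question above, a minimal sketch of what "default to DEBUG, reserve INFO for end-user-useful messages" might look like in a charm hook, using charmhelpers' hookenv.log; the function name and pool values are illustrative placeholders, not code from any of the charms discussed.

    from charmhelpers.core.hookenv import log, DEBUG, INFO

    def configure_storage(pool_name, replicas):
        # Internal detail: useful when debugging the charm, noise otherwise,
        # so it goes out at DEBUG (the proposed default).
        log('Requesting ceph broker operation for pool %s' % pool_name,
            level=DEBUG)
        # ... perform the actual ceph-broker request here ...
        # Progress an operator actually cares about, so INFO, along the lines
        # of the example messages in the discussion above.
        log('Configuring ceph storage pool %s with %d replicas'
            % (pool_name, replicas), level=INFO)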
[16:16] hmm lemme dig
[16:17] dosaboy, so an end-user message would be "Configuring ceph storage pools with name XX, replicas XX"
[16:17] and "Storage pools configured, starting cinder volume service"
[16:17] maybe
[16:17] not use
[16:17] not sure
[16:18] johnmce, I'm wondering if the stamp didn't get run
[16:19] jamespage: *some* info here - http://docs.openstack.org/openstack-ops/content/logging_monitoring.html
[16:19] lemme find better
[16:19] https://wiki.openstack.org/wiki/LoggingStandards
[16:20] neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini stamp icehouse then upgrade head
[16:20] jamespage: ^^
[16:21] johnmce, ok, I think I see the bug
[16:22] can you try running: neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini stamp icehouse and see if the migration then runs cleanly?
[16:23] gnuoy: Not read this properly yet, but it is related: http://www.gossamer-threads.com/lists/openstack/dev/42070
[16:25] gnuoy: Response to command: INFO [alembic.migration] Context impl MySQLImpl. INFO [alembic.migration] Will assume non-transactional DDL.
[16:26] johnmce, yes, that thread looks to be the same issue
[16:28] gnuoy: Seems to be working now!
[16:28] gnuoy: thanks for your help
[16:28] johnmce, np, I will fix that in the charms tomorrow
[16:30] johnmce, sorry that you hit a bug. fwiw the bug is with line 522 of hooks/nova_cc_utils.py. that conditional shouldn't be there and was left over from when nova-cc and neutron-api were deciding between them on who should run the migration
[16:41] gnuoy: OK, I see the problem now. Thanks for the info. I'll modify my copy.
[16:46] working on aws, does anyone ever get a machine that reports itself as one public ip, but when you ssh in, ssh thinks it's a different public ip?
[16:52] whit: I've seen this behavior before - what's weird is that querying the metadata url caused the IPs to correct themselves.
[16:53] lazyPower, tell me more about this metadata url?
[16:53] whit: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
[16:53] lazyPower, danke
[16:54] whit: no idea what caused the voodoo tbh - it was an isolated incident from another user that joined #juju ~ a week ago.
[16:54] lazyPower, wonder if that forced the route to get set?
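The fix gnuoy suggests above boils down to stamping the existing schema as Icehouse so alembic skips the revisions that are already applied, then upgrading to head. A rough sketch of that sequence, wrapping the exact commands quoted in the log; the helper name is illustrative, not the charm's actual function.

    import subprocess

    NEUTRON_DB_MANAGE = [
        'neutron-db-manage',
        '--config-file=/etc/neutron/neutron.conf',
        '--config-file=/etc/neutron/plugins/ml2/ml2_conf.ini',
    ]

    def stamp_and_upgrade():
        # Mark the existing schema as the icehouse revision so alembic does
        # not try to recreate tables that are already there...
        subprocess.check_call(NEUTRON_DB_MANAGE + ['stamp', 'icehouse'])
        # ...then apply only the icehouse -> juno migrations.
        subprocess.check_call(NEUTRON_DB_MANAGE + ['upgrade', 'head'])

    if __name__ == '__main__':
        stamp_and_upgrade()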
=== kadams54 is now known as kadams54-away
[16:55] well, a curl to the metadata url shouldn't have any effect - that's all set during cloud-init
[16:55] I've seen it 3 times in the last week
=== kadams54-away is now known as kadams54
[16:55] dunno
=== kadams54 is now known as kadams54-away
[16:59] whit: status will give the public IP, ssh will use the internal IP
[16:59] whit: or at least that's the behaviour I'm seeing on AWS
[16:59] probably using the bootstrap node as a proxy
=== scuttle|afk is now known as scuttlemonkey
[17:34] jose: juju ssh does
[17:34] all connectivity between the workstation and nodes proxies through the state server
[17:34] there it is
=== kadams54-away is now known as kadams54
=== rcj` is now known as rcj
=== jog_ is now known as jog
=== Spads_ is now known as Spads
=== Spads_ is now known as Spads
[19:00] Juju Open feedback session is about to get started
[19:00] https://plus.google.com/hangouts/_/hoaevent/AP36tYdwa3d9A4ohXuoRdM-SQCQSJUzu4xgEflvg996V8rMyAw427g?authuser=0&hl=en
[19:01] if you'd like to join and add your view/comments/feedback we'd love to hear from you
=== kadams54 is now known as kadams54-away
=== Spads_ is now known as Spads
[22:06] cmars, the one thing that comes to mind is that it might make sense to not run the collect-metrics hook if a charm does not define any metrics
=== kadams54-away is now known as kadams54
=== CyberJacob is now known as CyberJacob|Away
=== kadams54 is now known as kadams54-away
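On the earlier AWS public-IP confusion, the instance metadata service lazyPower linked can be queried from the instance itself to see what EC2 thinks the addresses are, for comparison against what juju status and sshd report. A small Python 2 sketch; the 169.254.169.254 endpoint and the public-ipv4/local-ipv4 paths are the standard EC2 metadata ones, not details taken from this log.

    import urllib2

    METADATA_URL = 'http://169.254.169.254/latest/meta-data/'

    def metadata(item):
        # The metadata service is only reachable from inside the instance.
        return urllib2.urlopen(METADATA_URL + item, timeout=5).read().strip()

    if __name__ == '__main__':
        print('public-ipv4: %s' % metadata('public-ipv4'))
        print('local-ipv4:  %s' % metadata('local-ipv4'))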