[00:19] <ploxiln> it looks like the EC2 AMI for hvm-ssd has not been generated in cn-north-1 since 2018-11 - sorry to pop out of nowhere with a question, but where would be appropriate to report/ask about this?
[00:21] <sarnold> jamespage: ^^ are ec2 hvm-ssd image problems on cn-north-1 something you'd tend to?
[00:21] <ploxiln> well, correction: for xenial the last was 2018-11, for bionic the last was 2019-01. also it looks like non-hvm AMIs are still being generated regularly
[01:58] <ChmEarl> when my 19.04 server goes down while I have multiple ssh client windows connected, I don't get any notification and the shells just stay open
[02:16] <ploxiln> ChmEarl: I don't have any definitive fixes, just some tips and tricks. 1) if the ssh session is frozen/stuck, type <enter>~.
[02:16] <ploxiln> (including the period)
[02:17] <ploxiln> 2) you could "pkill sshd" as your user on the server, to close all ssh sessions for your user before shutting down the server
[02:17] <ploxiln> 3) you could try "sudo shutdown -r +1"
[02:18] <sarnold> ChmEarl: you could also set ServerAliveInterval in your ssh clients, but the downside is that sessions will be much more likely to time out if a router dies
[02:22] <ChmEarl> ploxiln, sarnold  thanks
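(A minimal sketch of the ServerAliveInterval approach sarnold describes, as client-side ~/.ssh/config settings; the host alias, address, and the 15-second/3-probe values here are illustrative, not from the discussion:)

    # ~/.ssh/config -- make the client notice a dead server instead of leaving the shell hanging
    Host myserver                  # hypothetical alias
        HostName 203.0.113.10      # placeholder address
        ServerAliveInterval 15     # send a keepalive probe every 15 seconds
        ServerAliveCountMax 3      # close the session after 3 unanswered probes (~45s)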
[06:06] <lordievader> Good morning
[10:16] <jamespage> sahid: you might just want to pick the patch from https://review.opendev.org/#/c/663294/
[10:16] <jamespage> sahid: it's still needed
[10:17] <jamespage> sarnold: fraid not
[10:18] <sahid> jamespage: i've kept it https://git.launchpad.net/~sahid-ferdjaoui/ubuntu/+source/neutron/tree/debian/patches/bug1826419.patch?id=6153732fec521d0a3d8044ad59a26acc6d05f083
[10:19] <jamespage> sahid: ah sorry I was confused by the diff of a diff
[10:19] <jamespage> sahid: yes that looks OK to me
[10:19] <sahid> ok cool perfect
[14:01] <kevindank> Hello, I'm about to migrate a website from shared hosting on Bluehost to cloud hosting on Linode. The main goal is site load time improvement - does a dedicated CPU plan make a difference here?
[14:01] <kevindank> My plan was to get an Ubuntu 19.04 server with 16 GB of RAM, a 320 GB SSD, and an 8-core CPU, but there's another plan offering a smaller SSD and more cores
[14:05] <leftyfb> kevindank: I would not recommend 19.04 for a server. Not unless you plan on upgrading it in 7 months. Stick with LTS.
[14:06] <kevindank> leftyfb: 18.04 lts?
[14:06] <leftyfb> kevindank: yes, that is the latest LTS
[14:06] <leftyfb> supported for 5 years from release
[14:06] <kevindank> okay
[14:08] <TJ-> kevindank: no, you don't need a dedicated CPU linode, the standard will do
[14:08] <kevindank> TJ-: Thanks
[14:09] <TJ-> kevindank: the 2-CPU/4GB plan is usually enough for most sites; I host several relatively busy domains on one such plan
[14:10] <jamespage> coreycb, sahid: do you think we should start stripping out python- binary packages this cycle? we dropped py2 support in the actual openstack projects, so we could work back down the dependency chain now
[14:11] <coreycb> jamespage: yeah i think so. just need to be careful with swift deps i think.
[14:13] <kevindank> TJ-: That's typically what I use. I have a website that's on Bluehost right now that is maxing out RAM and CPU on the Bluehost standard server, so I went with the higher-RAM plan of 16 GB
[14:14] <TJ-> kevindank: what's eating the memory? database?
[14:15] <kevindank> Total CPUs: 16 / 16 cores | Total RAM: 31 GB | Real Time Free RAM: 2 GB
[14:16] <kevindank> Real Time Free RAM: 1 GB | Real Time RAM Usage: 29 GB
[14:16] <kevindank> I'm pulling that using WP Server Stats; I'm not sure if that's the RAM/CPU of the whole shared server or just our site on that shared server, however
[14:17] <kevindank> DB is 78 MB
[14:18] <kevindank> the site is a law firm site; there's a bunch of pages and a chat feature, but it should not be using 29 GB by itself at all
[14:27] <TJ-> kevindank: is that in the Linode, or at the shared hosting?
[14:28] <TJ-> kevindank: I'd presume it's the shared host and it is over-subscribed
[14:44] <coreycb> sahid: neutron 2:13.0.3-0ubuntu1 uploaded to the cosmic unapproved queue. thanks!
[14:46] <sahid> coreycb: thanks
[19:21] <sarnold> ploxiln: apparently other images are being updated, it's the streams interface that's giving trouble -- eg https://paste.ubuntu.com/p/PM7QMBNm9g/
[19:23] <sarnold> ploxiln: that was generated via aws --profile=china ec2 describe-images --region=cn-north-1 --owners "837727238323" --query 'Images[*] | sort_by(@, &CreationDate)'  --- I don't know enough aws to tell you how to create a 'china' profile like that, but I hope it's something you're familiar with or have already done
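(For reference, a named AWS CLI profile like the 'china' one above is just an entry in ~/.aws/config and ~/.aws/credentials; this is a sketch with placeholder keys - AWS China accounts use credentials separate from the global partition:)

    # ~/.aws/config
    [profile china]
    region = cn-north-1
    output = json

    # ~/.aws/credentials
    # placeholder keys for illustration only
    [china]
    aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
    aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

(or interactively: aws configure --profile china)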
[19:23] <ploxiln> thanks for looking into it. the one I'm hoping for is 16.04 xenial with type hvm-ssd - I did an explicit search for it and I think that one is not being generated
[19:24] <ploxiln> so I did: aws ec2 describe-images --filters Name=name,Values="ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*" Name=owner-id,Values=837727238323 --query "Images[*].[CreationDate,ImageId,RootDeviceName]" --output text | sort
[19:24] <ploxiln> (with cn-north-1 region)
[19:25] <ploxiln> and got 2018-11-21T07:51:00.000Z  ami-013ead89472fc7464  /dev/sda1 as the latest
[19:26] <ploxiln> proper paste: https://paste.ubuntu.com/p/bNKnN3sj8T/
[19:29] <ploxiln> bionic does have a newer hvm-ssd image, but the previous one is from February, and the one before that from August 2018, so I just get the impression that general China flakiness makes each upload a roll of the dice
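(One way to cross-check what Canonical's published image listings say for that region - a sketch assuming the legacy per-release listing at cloud-images.ubuntu.com/query/ still exists and carries region, root-store, and virtualization-type fields:)

    # list published xenial AMIs for cn-north-1 with hvm virtualization on ebs-ssd root volumes
    curl -s http://cloud-images.ubuntu.com/query/xenial/server/released.current.txt \
      | grep cn-north-1 | grep ebs-ssd | grep hvm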
[20:49] <Ussat> So I get a CALL......a Dr is having trouble with his linux system. It's running...wait for it.....Ubuntu 14.04
[20:50] <Ussat> I almost laughed into the phone
[20:52] <compdoc> what's his phone number. I want to laugh at him for reals
[20:54] <sdeziel> Windows XP I would have laughed at, but I'm actually impressed a Dr is using Ubuntu :)
[20:56] <compdoc> tiz good stuff
[21:16] <Ussat> We are taking it, burning it in the deepest darkest corner of hell, and making him a VM under my control
[21:29] <Ussat> sdeziel, it wasn't so much him "using" it per se; his grad student set it up so he could run a few programs specific to his work
[21:29] <Ussat> but ya we are burning it down...hard