ploxiln | it looks like the EC2 AMI for hvm-ssd has not been generated in cn-north-one since 2018-11 - sorry to pop out of nowhere with a question, but where would be appropriate to report/ask about this? | 00:19 |
---|---|---|
ploxiln | s/cn-north-one/cn-north-1/ | 00:20 |
sarnold | jamespage: ^^ is ec2 hvm-ssd image problems on cn-north-1 something you'd tend to? | 00:21 |
ploxiln | well, correction: for xenial the last was 2018-11, for bionic the last was 2019-01. also it looks like non-hvm AMIs are still being generated regularly | 00:21 |
ChmEarl | when my 19.04 server goes down with multiple ssh client windows, I don't get notifications and the shells stay open | 01:58 |
ploxiln | ChmEarl: I don't have any definitive fixes, just some tips and tricks. 1) if the ssh session is frozen/stuck, type <enter>~. | 02:16 |
ploxiln | (including the period) | 02:16 |
ploxiln | 2) you could "pkill sshd" as your user on the server, to close all ssh sessions for your user before shutting down the server | 02:17 |
ploxiln | 3) you could try "sudo shutdown -r +1" | 02:17 |
sarnold | ChmEarl: you could also set ServerAliveInterval in your ssh clients, but the downside is the sessions will be much more likely to timeout if a router dies | 02:18 |
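A minimal client-side sketch of the ServerAliveInterval setting sarnold mentions; the host alias and hostname below are placeholders, not anything from the log:

```
# ~/.ssh/config -- "myserver" and the HostName are placeholders
Host myserver
    HostName server.example.com
    # probe the server every 30 seconds over the encrypted channel
    ServerAliveInterval 30
    # give up after 3 unanswered probes (~90 s), closing a stuck session
    ServerAliveCountMax 3
```

With this in place, a hung session closes itself after roughly ServerAliveInterval × ServerAliveCountMax seconds instead of lingering open, which is the trade-off sarnold notes: a brief routing blip can also kill an otherwise healthy session.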
ChmEarl | ploxiln, sarnold thanks | 02:22 |
=== cpaelzer__ is now known as cpaelzer | ||
lordievader | Good morning | 06:06 |
jamespage | sahid: you might just want to pick the patch from https://review.opendev.org/#/c/663294/ | 10:16 |
jamespage | sahid: it's still needed | 10:16 |
jamespage | sarnold: fraid not | 10:17 |
sahid | jamespage: i've kept it https://git.launchpad.net/~sahid-ferdjaoui/ubuntu/+source/neutron/tree/debian/patches/bug1826419.patch?id=6153732fec521d0a3d8044ad59a26acc6d05f083 | 10:18 |
jamespage | sahid: ah sorry I was confused by the diff of a diff | 10:19 |
jamespage | sahid: yes that looks OK to me | 10:19 |
sahid | ok cool perfect | 10:19 |
=== cpaelzer__ is now known as cpaelzer | ||
kevindank | Hello, I'm about to migrate a website from shared hosting on Bluehost to cloud hosting on Linode. The main goal is site load time improvement; does a dedicated CPU plan make a difference here? | 14:01 |
kevindank | My plan was to get an Ubuntu 19.04 server with 16 GB of RAM, a 320 GB SSD, and an 8-core CPU, but there's another plan offering a smaller SSD and more cores | 14:01 |
leftyfb | kevindank: I would not recommend 19.04 for a server. Not unless you plan on upgrading it in 7 months. Stick with LTS. | 14:05 |
kevindank | leftyfb: 18.04 lts? | 14:06 |
leftyfb | kevindank: yes, that is the latest LTS | 14:06 |
leftyfb | supported for 5 years from release | 14:06 |
kevindank | okay | 14:06 |
TJ- | kevindank: no, you don't need a dedicated CPU linode, the standard will do | 14:08 |
kevindank | TJ-: Thanks | 14:08 |
TJ- | kevindank: the 2-CPU/4GB is usually enough for most sites; I host several domains on one such that are relatively busy | 14:09 |
jamespage | coreycb, sahid: do you think we should start stripping out python- binary packages this cycle? we dropped py2 support in the actual openstack projects, so we could work back down the dependency chain now | 14:10 |
coreycb | jamespage: yeah i think so. just need to be careful with swift deps i think. | 14:11 |
kevindank | TJ-: That's typically what I use. I have a website that's on Bluehost right now that is maxing out RAM and CPU on the Bluehost standard server, so I went with the higher-RAM plan of 16 GB | 14:13 |
TJ- | kevindank: what's eating the memory? database? | 14:14 |
kevindank | Total CPUs : 16 / 16 Cores Total RAM : 31 GB Real Time Free RAM : 2 GB Real Time RAM Usage : | 14:15 |
kevindank | Real Time Free RAM : 1 GB Real Time RAM Usage : 29 GB | 14:16 |
kevindank | I'm pulling that using WP Server Stats; I'm not sure if that's the RAM/CPU of the shared server or of our site on that shared server, however | 14:16 |
kevindank | DB is 78mb | 14:17 |
kevindank | the site is a law firm site, there's a bunch of pages and a chat feature but it should not be using 29 GB by itself at all | 14:18 |
TJ- | kevindank: is that in the Linode, or at the shared hosting? | 14:27 |
TJ- | kevindank: I'd presume it's the shared host and it is over-subscribed | 14:28 |
coreycb | sahid: neutron 2:13.0.3-0ubuntu1 uploaded to the cosmic unapproved queue. thanks! | 14:44 |
sahid | coreycb: thanks | 14:46 |
=== lotuspsychje_ is now known as lotus|celerynuc | ||
=== RoyK^ is now known as RoyK | ||
sarnold | ploxiln: apparently other images are being updated, it's the streams interface that's giving trouble -- eg https://paste.ubuntu.com/p/PM7QMBNm9g/ | 19:21 |
sarnold | ploxiln: that was generated via aws --profile=china ec2 describe-images --region=cn-north-1 --owners "837727238323" --query 'Images[*] | sort_by(@, &CreationDate)' --- I don't know enough aws to tell you how to create a 'china' profile like that, but I hope it's something you're familiar with or already done | 19:23 |
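For the record, a named profile like the `china` one sarnold uses is just a stanza in the AWS CLI config files; a sketch under the assumption of standard file locations (the credential values are placeholders):

```
# ~/.aws/config
[profile china]
region = cn-north-1
output = json

# ~/.aws/credentials
[china]
aws_access_key_id = PLACEHOLDER_KEY_ID
aws_secret_access_key = PLACEHOLDER_SECRET
```

Any `aws` invocation then selects it with `--profile=china`, as in sarnold's command above.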
ploxiln | thanks for looking into it. the one I'm hoping for is 16.04 xenial with type hvm-ssd - I did an explicit search for it and I think that one is not being generated | 19:23 |
ploxiln | so I did: aws ec2 describe-images --filters Name=name,Values="ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*" Name=owner-id,Values=837727238323 --query "Images[*].[CreationDate,ImageId,RootDeviceName]" --output text | sort | 19:24 |
ploxiln | (with cn-north-1 region) | 19:24 |
ploxiln | and got 2018-11-21T07:51:00.000Z ami-013ead89472fc7464 /dev/sda1 as latest | 19:25 |
ploxiln | proper paste: https://paste.ubuntu.com/p/bNKnN3sj8T/ | 19:26 |
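ploxiln's one-liner above, reformatted for readability; running it requires AWS CLI credentials with access to the cn-north-1 region, so this is a sketch rather than something verified here:

```shell
# Same query as in the log: list the hvm-ssd xenial AMIs from the
# owner ID used above, oldest first.
aws ec2 describe-images \
  --region cn-north-1 \
  --filters \
    Name=name,Values="ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*" \
    Name=owner-id,Values=837727238323 \
  --query "Images[*].[CreationDate,ImageId,RootDeviceName]" \
  --output text | sort
```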
ploxiln | bionic does have a newer hvm-ssd image, but the previous one is from February, and the one before that from August 2018, so I just get the impression that general china flakiness makes each upload a roll of the dice | 19:29 |
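A side note on why the plain `sort` at the end of ploxiln's pipeline works: ISO-8601 UTC timestamps in a uniform format order chronologically when sorted as strings. A quick pure-Python check (the dates are illustrative, not real image dates):

```python
# ISO-8601 timestamps in the same UTC ("Z") format sort correctly
# as plain strings, so `--output text | sort` leaves the newest last.
dates = [
    "2019-01-15T09:30:00.000Z",
    "2018-11-21T07:51:00.000Z",
    "2018-08-02T12:00:00.000Z",
]
newest = sorted(dates)[-1]
print(newest)  # 2019-01-15T09:30:00.000Z
```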
Ussat | So I get a CALL......a Dr is having trouble with his linux system. It's running...wait for it.....Ubuntu 14.04 | 20:49 |
Ussat | I almost laughed in the phone | 20:50 |
compdoc | whats his phone number. I want to laugh at him for reals | 20:52 |
sdeziel | Windows XP I would have laughed at, but I'm actually impressed a Dr is using Ubuntu :) | 20:54 |
compdoc | tiz good stuff | 20:56 |
Ussat | We are taking it, burning it in the deepest darkest corner of hell and making him a VM under my control | 21:16 |
Ussat | sdeziel, it wasn't so much him "using" it per se, his grad student set it up so he could run a few programs specific to his work | 21:29 |
Ussat | but ya we are burning it down...hard | 21:29 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!