[00:09] genii I am using ubuntu 14.04 LTS
[00:10] caliculk: uh, then that version is bogus
[00:10] caliculk: in trusty, the version of libssl1.0.0 is
[00:10] 1.0.1f-1ubuntu2 1.0.1f-1ubuntu2.18
[00:10] one of those -^
[00:11] 1.0.2g-1ubuntu4 is the xenial version
[00:11] Well, I can't downgrade it without breaking several pieces of software
[00:11] caliculk: i'm not sure how you even installed it if you are on 14.04
[00:12] that doesn't make sense
[00:12] Must have some xenial repos
[00:12] which is not advised...
[00:12] :)
[00:12] I don't know, this system has been a handful, and I am just now getting a dedicated server for VMs so I can roll back changes like this if there is a problem.
[00:20] hmm, after upgrade from 15.10 to 16.04 I can no longer log in as root via ssh??!?
[00:22] You should not have been able to log in as root under 15.10 either, unless you changed default sshd settings
[00:25] Aren't key-based root SSH logins allowed by default? Or am I hallucinating not having changed the defaults?
[00:26] Possibly.
[00:27] Xenial has "PermitRootLogin prohibit-password" by default
[00:51] sdeziel, strange, I think I disabled that, because I logged in before without a key
[00:51] now after upgrade it is no longer possible
[00:51] the bad thing is, it is not easy to check now, because it is a headless device....
[01:00] Aison: looking at the postinst for SSH, it seems to do the right thing and not touch PermitRootLogin if you had changed it from the default
[01:02] very weird
[01:02] on another device it worked
[02:31] How do you delete the trash from your hard drive via the command line? I have a drive that is full and I removed a bunch of files but there is no change, and it was at least three GB of space.
[02:32] heh?
[02:32] command line has no *trash*
[02:32] I tried to use sudo rm -Rf ~/.Trash/* but that didn't work
[02:33] i have a 320gb drive that says it is full. I deleted about 3 gb of files but it still says full
[02:34] when I run df -h it still says there is no space
[02:34] Another odd thing is it is a 320 gb drive but it shows this: /dev/sda1 294G 280G 0 100% /wd320
[02:35] if 294gb is the total and 280gb is used that is not 100%
[02:47] Basically I removed files, but they are still taking up space somehow
=== ubuntu is now known as Guest94310
[03:19] Hello, can I install a GUI program on Ubuntu server, or do I need to use the Ubuntu desktop version to allow the GUI program to actually show graphics?
=== not_phunyguy is now known as phunyguy
[06:11] ubuntu1604 yes you need x11
[06:12] but u could install that on top of server
=== athairus is now known as afkthairus
[09:12] maxb: SSH to root is certainly disabled in cloud images.
=== ChanServ changed the topic of #ubuntu-server to: Ubuntu Server discussion and support | For general (not server specific) support, try #ubuntu | IRC Guidelines: https://wiki.ubuntu.com/IrcGuidelines | https://wiki.ubuntu.com/ServerTeam/GettingInvolved | Docs and resources: https://help.ubuntu.com/16.04/serverguide/ | 14.04 to 16.04 will be offered on July 21st when 16.04.1 is released
[14:05] Is access denied to /root? I can't cd to it... tried using sudo to elevate permissions and cd to /root but it doesn't understand the command. Can someone please explain why this is and how to properly access /root?
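For the full-disk puzzle in the 02:31-02:47 exchange above, two things commonly explain "I deleted files but df did not change": deleted files still held open by a running process keep consuming space until that process exits, and ext filesystems reserve roughly 5% of blocks for root, which is also why 280G used of 294G can still read as 100%. A rough sketch of the usual checks, reusing the device and mount point from the log; everything else is generic:

    sudo lsof +L1 | grep /wd320                               # files deleted but still held open by a process
    sudo tune2fs -l /dev/sda1 | grep -i 'reserved block'       # reserved-for-root blocks (ext2/3/4 only)
    sudo du -xsh /wd320                                        # compare with df -h to spot the discrepancy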
[14:06] MelRay: you cannot use "sudo cd" because "cd" is provided by your user's shell which lacks the required privileges to go into /root
[14:06] MelRay: you could inspect /root's content with this though: sudo ls -alh /root
[14:07] sdeziel: Ok thanks let me try that...
[14:09] sdeziel: So by design, is the server edition of Ubuntu hardened to disallow direct access to /root since it will be exposed to potential attacks from outside?
[14:10] MelRay: I believe that /root has always been restricted to root itself, nothing Ubuntu specific there.
[14:11] sdeziel: Hmmmm... ok, I just never really tried to access root so it never came up. Thanks for the assist.
=== PaulW2U_ is now known as PaulW2U
=== nitemare is now known as trobotham
=== warraymos_alt is now known as warraymos
=== teddy is now known as 18VAASUD3
[16:02] bekks: A functional test caused a memory leak in FF
[16:02] It ate 30-40GB of RAM+swap in 1-2 min and triggered the OOM killer on the containers host
[16:26] LostSoul, that is all?
[16:33] Yeah
[16:34] TBH, I've never seen something like that - I mean an FF memory leak that ate ~40GB of memory in 1-2 min
[16:34] I limited mem on the containers
[16:45] LostSoul: on all of them?
[16:57] Going to be setting up a new Ubuntu NFS server. 16.04 or 14.04? It's a really old Dell server, about 9 years old.
[16:58] LostSoul: doesn't really matter - probably better NFSv4 support in 16.04, though
[16:59] I'll be doing NFS3
[16:59] Most people seem to avoid 4
[16:59] no?
[16:59] no
[16:59] that's not a single reason to avoid NFSv4
[16:59] s/that/there/
[16:59] Is it not far more complicated? performance?
[17:00] performance is better, it supports old host/ip authentication (sec=sys)
[17:00] and it supports NFSv3 if you install portmap etc
[17:00] So then why have I never encountered an organization using 4?
[17:01] Because orgs are slow to adapt
[17:01] But not really, just spitballing
[17:01] the only reason NFSv3 is still supported is that people are slow to adapt
[17:02] RoyK: guessing you're a v4 user.
[17:02] I have a couple of machines used for vmware storage, so they use NFSv3, since we're at ESXi 5.5, which doesn't support NFSv4 (nor IPv6 for NFS)
[17:03] Does ESXi v6 support v4?
[17:03] yes
[17:03] and NFS over IPv6
[17:03] Yeah, see that's discouraging. If massive orgs like vmware are not supporting it yet.. that's going to make my decision easy.
[17:03] Okay, cool. Was planning on installing 6.0 on a new server
[17:03] lucidguy: newer distros also include support for older NFS versions
[17:04] lucidguy: so there's no reason to choose 10.04 or something to avoid NFSv4 support :P
[17:04] btw, 12.04 also supports NFSv4, but not as well
[17:04] I would go with 16.04 anyways
[17:05] what sort of setup is this?
[17:05] RoyK: What all?
[17:05] It happened on one of them, but I didn't have limits so it killed the whole container
[17:06] I was only thinking of 14.04 or 12.04 because of the old hardware I'm installing it on.
[17:07] lucidguy: usually, new distros support old hardware just as well as old distros
[17:07] well, depending on the meaning of 'old', obviously - 80386 support was removed from the kernel some years back ;)
[17:18] damn
=== afkthairus is now known as athairus
[17:50] Still trying to get 'conjure-up openstack' working. Anyone had success with this?
[17:53] stokachu, ^^
[17:54] rstarmer: yea, where are you stuck at
[17:56] stokachu: The system _almost_ completes, and then complains about not being able to create an external network, and then fails.
[17:56] ok can you try from ppa:conjure/ppa
[17:56] Sure.
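Switching to the PPA build suggested just above would look roughly like this; the package name conjure-up is an assumption here, not confirmed in the log:

    sudo add-apt-repository ppa:conjure/ppa
    sudo apt update
    sudo apt install conjure-up        # package name assumed
    conjure-up openstack               # then re-run the deployment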
[17:56] rstarmer: fixed a bunch of neutron stuff
[17:56] needs more testing before I push to the archive
[17:57] Ok. Also, previously I ran into an issue where the system complained about a missing ssh key (new system didn't have a local id_rsa), and so that was missing as a pre-requisite.
[17:57] should be able to rerun with conjure-up openstack -s
[17:57] yea fixed the SSH issue hopefully too
[17:57] any info on this bug? https://www.google.nl/webhp?q=marvell+88e8056+sky2+rx+error
[18:04] stokach: ok, I just blew away the underlying machine, and re-created. I'll try -s next time.
[18:04] would anyone possibly have experience with vagrant and ssh keys? i'm having slight trouble inserting the private key and getting in
[18:05] temmi_hoo: try vagrant ssh-config, it should give you the details of what has been set up.
[18:05] yes, i'm willing to put in my own keys and this is when things get not that bright
[18:06] vagrant ssh-config gives me the info and then i can ssh vagrant@127.1 -p XYZ -i LONGSTRING
[18:06] i'd like to get in using vagrant ssh
[18:07] now, this all works nicely using the default key system but providing my own keys doesn't seem to work
[18:10] rstarmer: cool, lemme know how it goes, I want to try and get a fix pushed out tonight or this weekend
[18:10] shall do.
[18:13] rstarmer: how did you put your own keys in the vagrants? i know it's two lines of Vagrantfile and then the keyfile needs to be present
[18:13] i'm already adding my own public key to the .ssh/authorized_keys file
[18:16] nacc: ping
[18:16] teward: pong
[18:16] nacc: with the php7.0 migration, was a 'default' set of plugins chosen for autoinclusion, and if so where was that documented?
[18:16] nacc: just installing php7.0-fpm, for example, misses plugins that a lot of PHP frameworks want
[18:16] :p
[18:17] and then require manual installation therein
[18:17] teward: followed whatever is in debian
[18:17] hmm
[18:17] teward: our only delta right now is a few fixes for php itself, iirc
[18:17] teward: can you give me an example?
[18:17] nacc: mediawiki, common wiki software, missingdep: php7.0-xlm
[18:17] grr
[18:17] damned keyboard
[18:18] nacc: mediawiki, common wiki software, missing dep: php7.0-xml, needs manual install
[18:18] i'm testing all the stuff i use now
[18:18] mediawiki's first on the list :p
[18:18] teward: yes, php7.0-xml is split out
[18:18] nacc: ack
[18:18] that's an upstream decision, iirc
[18:18] also ack
[18:18] teward: so if it's a problem, it might have been something i missed in mediawiki's packaging
[18:18] and mediawikis hould add a dep
[18:18] *should
[18:19] teward: mbstring was another common one to need to add
[18:19] temmi_hoo: I add my own public key (rather than Vagrant's generated key) via an ansible playbook, so that I can inject the keys I want to use, rather than Vagrant's provided keys. The other option is to create your own vagrant box, which is often less convenient.
[18:19] nacc: yeah i've seen that too
[18:20] nacc: i don't see mediawiki in the repos, it probably got dropped
[18:20] nacc: but when installing from tarballs there's complaints
[18:20] so meh
[18:20] teward: ack, i think upstream claimed no php7.0 support
[18:20] teward: iirc
[18:20] nacc: they did - i'm still testing anyways because I can :P
[18:20] heh
[18:20] nacc: though, the version in alpha has support :P
[18:20] supposedly
[18:20] yeah, i was focused on released versions as much as i could
[18:20] :P
[18:20] nacc: indeed.
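Pulling in the split-out modules discussed above is a one-liner; php7.0-xml and php7.0-mbstring are the package names mentioned in the conversation, and the FPM restart only applies if the application is served through php7.0-fpm:

    sudo apt install php7.0-xml php7.0-mbstring
    sudo systemctl restart php7.0-fpm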
[18:20] nacc: just seemed a little odd that some critical things in a lot of frameworks went missing
[18:21] teward: yeah, so this is just the "new" php way
[18:21] god forbid what wordpress will yell about xD
[18:21] heh
[18:21] nacc: ack, wasn't sure if these major changes were documented, but I didn't see a complete set of info in the release notes, hence the ping :)
[18:21] teward: iirc, wordpress in the archive does work
[18:21] rstarmer: so you didn't use the Vagrantfile configuration parameters to inject your keys?
[18:21] nacc: E:OutOfDate
[18:21] recent security patches came out
[18:21] so...
[18:21] teward: yeah, i should probably have documented more of those changes
[18:22] teward: oh ack, all of that stuff is stale :)
[18:22] yup
[18:22] it felt like
[18:22] temmi_hoo: the inject route works better for me as I don't only use my target systems with Vagrant, but also with other cloud providers, so I needed something that was a bit more flexible than the Vagrant-only solution.
[18:22] nacc: shooting yourself in the foot sounds like an apt equivalent situation :P
[18:23] ahhh okay
[18:23] teward: heh
[18:23] well i'm not yet using ansible but that's on my roadmap
[18:23] teward: that's a good point, though, i wonder if i should have put something in the serverguide
[18:25] i had the kind of experience that using ansible to apt install lots of stuff was taking horribly much longer than doing the same from a shell script - i'm not sure what or why, but at that point i went to using just shell scripts for provisioning
[18:26] that's all good since now the configuration is more readable to people not in the know of vagrant
[18:26] nacc: we can probably still get that updated in the server guide at this point
[18:27] nacc: just saying that a little more documentation may be prudent if people come in saying "WTH DID YOU DO IT BREAKS EVERYTHING IT DON'T WORK WHY DID..." etc etc etc
[18:27] now another question regarding vagrant: did rstarmer or someone else have success in running a single specific provision script out of many that happen to use the same provisioner? i gave names to them but am not having great success in selective provisioning
[18:27] * teward is already seeing some issues coming up on Ask Ubuntu wrt the transition
[18:28] temmi_hoo: I use the ansible provisioner, and a single targeted play, so I'm not running into multiples issue(s)
[18:28] ah
[18:30] nacc: it does seem, though, that the link to my blog post on how to fix the php5-fpm -> php7.0-fpm stuff for nginx upgrades *is* getting read
[18:30] so at least some people read up on the docs, thanks for including that link in the Release Notes
[18:31] teward: ack
[18:31] * teward sighs, and goes to start figuring out the next nginx merge
[18:33] rbasak: remind me what I need to do to reclass some of the new binaries included in nginx's builds in Main again?
[18:33] or should I just poke -release about it
[18:41] stokachu: Everything is showing up active on the dashboard, but I have a message "Neutron not ready yet..." at the bottom of the screen.
[18:42] rstarmer: yea give it a few, it is still configuring the gateway etc
[18:42] that's from the post processing script
[18:42] rstarmer: https://github.com/Ubuntu-Solutions-Engineering/openstack-deb/blob/master/bundles/common-openstack/post.sh for reference
[18:42] are there any expectations of the network environment? This is just a single-interface machine at the moment.
[18:43] rstarmer: you should have a custom bridge 'openstack0' set up on the machine which i expose as a second interface to the containers
[18:43] rstarmer: so you get everything set up for you
[18:43] did you choose the nova lxd one?
[18:44] yeah, I selected the Openstack-LXD variant
[18:44] rstarmer: nice, how much ram?
[18:44] 16G on this (It's a digitalocean VM), 8 cores, 160G SSD for storage
[18:45] rstarmer: ah i had issues trying to run it on digitalocean
[18:45] you may have better luck
[18:46] in theory it should work
[18:47] I can't log on to my server if the network cable is disconnected
[18:47] I'm even using a local account (the server was joined to a domain)
[18:50] stokachu: I'll let it run till it either times out or breaks. will let you know regardless.
[18:50] rstarmer: if you want i can log in and take a look too
[18:50] got a public key handy?
[18:51] rstarmer: adam-stokes on launchpad
[18:51] ssh-import-id adam-stokes
[18:51] such a cool tool
[18:52] stokachu: try ubuntu@conjure.opsits.com
[18:53] cool I'm in
[18:54] rstarmer: gonna kill your conjure-up so i can take a look
[18:54] K
[18:57] rstarmer: neutron subnet-create --name ubuntu-subnet --gateway 10.101.0.1 --dns-nameserver 8.8.8.8 8.8.4.4 ubuntu-net 10.101.0.0/24
[18:57] ah i need to parse better for the nameserver in /etc/resolv.conf
[18:57] or just use the first one
[18:57] probably just the first is fine, otherwise, I believe you have to double up the --dns-nameserver parameter
[18:57] rstarmer: can i comment out one of them just for the time being
[18:58] certainly
[18:58] it's just the defaults for the deployment anyway
[18:58] rstarmer: k, I'll make a note to fix that parsing
[18:59] cool
[18:59] rstarmer: if this works for you I'm going to have to try it again on a droplet myself
[19:00] it's adding the floating ips now, be just a minute
[19:01] I'm happy to re-run again as well. I'll just remove one of the nameservers.
[19:01] rstarmer: when this is done I'll have you run conjure-up openstack -s and get the horizon IP
[19:01] I did create an ssh key as well, is that a pre-req or does the tool address that now?
[19:01] sounds good.
[19:01] the tool should create one as well now
[19:01] ok, I'll drop that from the next run
[19:02] rstarmer: I've got bugs filed here too https://bugs.launchpad.net/ubuntu/+source/openstack
[19:03] rstarmer: go ahead and rerun `conjure-up openstack -s`
[19:03] you'll probably want to run sshuttle so you can get to horizon, so like sshuttle -r ubuntu@conjure.opsits.com -r 10.42.154.0/24
[19:03] stokachu: random comment regarding bundles/common-openstack/post.sh, cloud-images.ubuntu.com is available with HTTPS so might as well use it since there is no GPG validation of the root fs/VM image
[19:04] sdeziel: thanks, I'll make a card to get that updated
[19:04] sdeziel: wha??
[19:04] I thought all our tools properly checked signatures?
[19:05] sarnold: only on sundays
[19:05] sarnold: I'm talking about https://github.com/Ubuntu-Solutions-Engineering/openstack-deb/blob/master/bundles/common-openstack/post.sh
[19:05] * sdeziel notes to only work on Sundays
[19:08] I wonder why "root.tar.gz" is used instead of the smaller .xz version
[19:09] sdeziel: had a bunch of stuff to do before ods, but that's a good point, i can fix that too
[19:13] rstarmer: lemme know if you can get an instance up and ssh into it, ping outside etc
[19:13] stokachu: shall do, just running into a meeting, but will try in ~1 hr or so.
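As noted in the 18:57 exchange, passing two resolvers means repeating the option rather than listing both addresses after one flag; a corrected form of the subnet-create command above would look something like:

    neutron subnet-create --name ubuntu-subnet --gateway 10.101.0.1 \
        --dns-nameserver 8.8.8.8 --dns-nameserver 8.8.4.4 \
        ubuntu-net 10.101.0.0/24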
[19:13] sdeziel: ping
[19:14] rstarmer: cool, I'll be around
[19:14] sdeziel: would you happen to know much about samba configuration? particularly replacing libpam-smbpass (no longer in 16.04) with libpam-winbind. I'm trying to update the ServerGuide documentation
[19:15] * patdk-wk_ just uses sssd
[19:15] stokachu: System load at 450? top
[19:15] top - 15:14:09 up 1:15, 3 users, load average: 456.48, 168.48, 65.85
[19:15] Tasks: 808 total, 168 running, 640 sleeping, 0 stopped, 0 zombie
[19:15] rstarmer: yeaaa that's DO
[19:15] nacc: no, sorry
[19:15] Ok, will look more later.
[19:15] sdeziel: np, thanks
[19:16] I have yet to upgrade my smb server
[19:19] sdeziel: sure; i'm struggling to find any good migration documentation and was hoping to get the serverguide updated today. I also have no windows machines or any samba experience, though... so :)
[19:24] rstarmer: i remember that being one of the reasons it didn't work for me
[19:24] rstarmer: it may be worth having one of the server guys look into it since i think DO is KVM
[19:27] nacc: the samba release notes have a section on the new smb.conf options that may be used to make it insecure again: https://www.samba.org/samba/history/samba-4.4.2.html
[19:28] sarnold: thanks, i'll look
[19:52] hi guys
[19:52] i have seen on the internet that this chan is for support with ubuntu troubles
[19:53] is there someone who can give me some suggestions or hints?
[19:55] lordknicle: you haven't stated a problem?
[19:55] lordknicle: irc works best if you just ask questions
[19:56] hi guys
[19:56] i need to set up a computer for mail management
=== mohammad is now known as Guest61865
[20:10] Help please
[20:10] I am getting a kernel panic
[20:10] lordknicle: that's still not a question, just fyi
[20:11] jvwjgames: using the Ubuntu kernel? can you pastebin the panic?
[20:11] I will send a pic
[20:11] As it would take a very long time to type
[20:14] http://picpaste.com/pics/IMAG0492-cmywryJr.1461960853.jpg
[20:15] hey all
[20:16] i'm having a little problem with LXD and I was hoping someone might be able to help out
[20:20] Did you guys see it
[20:21] see what?
[20:21] The kernel panic
[20:21] I posted
[20:24] Any suggestions? I need this fixed
[20:24] jvwjgames: well, the error is pretty clear, you're somehow missing an executable /sbin/init
[20:24] How do I fix it
[20:26] I can't even get into recovery mode
[20:28] can you mount the livecd iso? might be able to fix stuff then
[20:28] coreycb, we need to tinker with the layout of the horizon package - it's an ugly beast
[20:28] jamespage, are you talking theme-wise?
[20:29] coreycb, yeah - two thoughts
[20:29] ya there's bugginess with the buttons on the ubuntu theme
[20:29] currently we generate stuff during package install directly into /usr/share/openstack-dashboard
[20:29] it works but it's not great
[20:29] should be /var/lib/openstack-dashboard
[20:30] I also think we could ship the ubuntu theme in the main package and just make it an end-user configurable option
[20:30] operators can disable it if they want by dropping it in local_settings.py
[20:30] koaps, latest horizon? aware of some niggles that design are working to resolve atm
[20:31] koaps, oh and what's your prob with LXD?
[20:32] jamespage: trusty/liberty, can't even deploy xenial/mitaka cleanly yet
[20:33] jamespage, ok. I'll look into that. any rush on this or just newton time frame?
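For the missing /sbin/init panic discussed a few lines up, the usual live-CD rescue is to mount the installed root, chroot into it, and reinstall the package that provides init; the device name and release details here are assumptions, not taken from the log:

    sudo mount /dev/sda1 /mnt                    # installed root partition (device assumed)
    for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
    sudo chroot /mnt
    apt-get install --reinstall systemd-sysv     # provides /sbin/init on systemd releases; upstart on 14.04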
[20:33] jamespage: think I found my problem with lxd, seems when I'm creating and starting lots of containers, cloud-init isn't keeping up, some of the containers are getting IPs but their names are wrong, and ssh isn't working (key issues)
[20:34] I split create and start into different scripts and added a sleep 1 to the start script
[20:34] seems to have fixed it
[20:34] not ideal
[20:34] but I can ssh to everything now
[20:41] koaps, the liberty nova lxd driver is very beta quality
[20:41] mitaka was our first ga
[20:41] jamespage: oh I'm not using lxd in openstack
[20:41] i'm using it to create the containers to run openstack
[20:41] koaps, ok can you explain your use case?
[20:42] koaps, so running openstack in containers?
[20:42] ya
[20:42] pretty the normal juju deployment for openstack, I believe
[20:42] koaps, one sec
[20:42] pretty much I meant
[20:42] koaps, with MAAS ?
[20:42] koaps, or all in one?
[20:43] we use MAAS for the physicals, so ceph, compute and gateway
[20:43] everything else is in a container
[20:43] right - ok
[20:43] koaps: so you're using Juju 2.0 with MAAS 1.9?
[20:43] not yet, 1.25
[20:43] MAAS 1.9
[20:43] we use manual environments
[20:44] so do juju add ssh:....
[20:44] for physicals
[20:44] koaps, right ok so that's still using the old lxc tech stack - not a problem but wanted to check
[20:44] sure, the thing I'm changing now is
[20:44] xenial lxd server and pre-creating the containers
[20:44] instead of juju-deployer using lxc:3 type of targets
[20:44] and then adding the containers to juju manually?
[20:44] yup
[20:45] it's because we have a kinda nasty network topology and needed to pre-create an overlay network for openstack
[20:46] koaps, so your problem is with creating the lxd containers themselves by the sounds of things?
[20:46] koaps, are you using static network configuration?
[20:46] or dhcp?
[20:46] yea, I had a script that was looping through 21 containers I need to create, it was launching and starting one by one, with a second delay between each
[20:46] sorry man
[20:47] just got out a lil bit
[20:47] but a few wouldn't come up right
[20:47] jamespage: ya, the whole thing is static IPs from dnsmasq and I set the MAC address on the containers
[20:48] koaps, ack
[20:48] koaps, you've gone a bit beyond my knowledge of LXD now - stgraber might be a better target for your problem :)
[20:49] I don't have the log any more, but SSH was complaining about its keys being wrong or something on the containers I couldn't connect to
[20:49] and the host names were ubuntu, so I figured cloud-init wasn't working right
[20:50] jamespage: no prob, it seems creating and starting needed a bit of distance from each other
[20:50] I can always replicate it and get logs if anyone wanted them
[20:50] koaps, might point to a bug which is why I ping stgraber - he's the tech lead on LXD
[20:51] koaps, #lxcontainers is also a good place to get help on lxd btw
[20:51] oh ok cool, I looked at the ubuntu support for for what channels there were and didn't see a lxd/lxc one so figured I would try here
[20:51] s/for/page/
[21:03] koaps, jamespage: would be interesting to look at the cloud-init log to see what's going on. I've never seen such behavior even in my benchmarks
[21:25] has anyone used conjure-up and know how to uninstall openstack, so i can re-run 'conjure-up openstack'?
[21:41] virtualguy: you could grep through /var/log/dpkg.log for packages installed after the time you used it, and apt-get purge those packages
[21:48] stokachu: while the load has dropped back to a more manageable level, there's still something borked... Currently the mysql process has ended up in an error state with 'hook failed: "config-changed"'.
[21:56] sarnold: thanks for merging the sshd AA profile!
[21:57] sdeziel: you're welcome; thanks for doing the work, and the reminders :D
[22:06] tried that but conjure-up was just hanging doing nothing. hitting ctrl-c would then bring up the install 'gui' but it wouldn't go any further. just reinstalled 16.04 since it was a fresh machine anyway
[22:07] i bombed out of the install because I wanted to change the location of the lxd containers but it seems conjure-up is a one-hit kinda thing, i couldn't even find a log file!
[22:34] teward: technically an MIR, though if it's something that should be covered by an existing MIR then you can ask an archive admin in #ubuntu-release.
[22:34] (or file some kind of stub MIR with an explanation)
[23:15] How do I copy files from a live CD to a hard drive on the same system?
[23:15] rstarmer: is it still up?
[23:16] jvwjgames: cp(1)
[23:16] Where do I do that at?
[23:17] virtualguy: the log file depends on what you conjure-up, so 'conjure-up openstack' would log to /var/log/openstack.log
[23:17] run "cat /proc/mounts" to see what is mounted where
[23:17] virtualguy: right now to uninstall openstack you would run 'juju destroy-model default; juju destroy-controller '
[23:17] which you get from juju list-controllers
[23:18] virtualguy: i agree, i still have documentation to write on all this
[23:18] How do I use cp?
[23:19] jvwjgames: https://askubuntu.com/questions/195983/how-to-copy-files-via-terminal
[23:19] jvwjgames: also, these general support questions should be asked in #ubuntu
[23:19] OK sorry
[23:19] np :)
[23:22] virtualguy: please file bugs on things that aren't clear or lacking so I can get them addressed
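A rough version of the dpkg.log approach suggested at 21:41 might look like this; the cut-off date is only an example, and the resulting list should be reviewed before purging anything:

    awk '$3 == "install" && $1 >= "2016-04-29" { print $4 }' /var/log/dpkg.log | sed 's/:.*$//' | sort -u
    # review the list carefully, then: sudo apt-get purge <packages>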