[00:30] Jimmy06: did you try running it again?
[00:30] there's a good shot it will work after a reboot.
=== pleia2_ is now known as pleia2
[02:37] Hello all, I need to build a Linux server for build purposes. I have some issues with the Ubuntu 16 "server" ISO booting with UEFI on an HP DL360 Gen10 box with RAID 0 SSD disks - the Ubuntu 16 "desktop" CD works well, so does it matter much if I use server or desktop?
[02:42] ansibleesxi666: you can install desktop and convert it to a server installation, but I would recommend doing a server install instead (since converting isn't that easy). by the way, you are probably referring to Ubuntu "16.04 LTS", there is no "ubuntu 16"
[02:43] ansibleesxi666: what is the error message you run into when you try to install using the server installer?
[02:43] tomreyn: all of this has been discussed in #ubuntu where they asked this same exact question
[02:43] oh, maybe ansibleesxi666 should refer to those answers then
[02:45] yes, 16.04.5 is what I am using. I see a strange issue: I have 3 SSD disks in the box, each 1 TB, and we want to use RAID 0 for performance. when I boot the box with the server ISO, the OS installer shows the disk at 600 GB; when I try the desktop ISO it shows the correct size of 3 TB
[02:46] disk as 600 GB, I mean
[02:46] as I was in a rush I used the desktop ISO & built them for now
[02:48] I did google it, and it says both desktop & server use the same kernel, so I went ahead with desktop
[02:49] ansibleesxi666: so did you ask these same questions in #ubuntu before?
[02:51] I did, I did not get an answer, so I am here
[02:52] leftyfb seems to suggest it was discussed there before, maybe you missed some responses (left early)?
[02:52] !irclogs
[02:52] Official channel logs can be found at https://irclogs.ubuntu.com/ . LoCo channels are now logged there too. Meetingology logs at https://ubottu.com/meetingology/logs/
[02:55] if you really got no replies (please point me to when it was previously discussed) I'll be happy to go over it with you again.
=== Raboo_ is now known as Raboo
[03:13] let me check
[03:13] No, I did not get the response I was looking for
[03:14] the last bit of the conversation was:
[03:14] (7:41:48 PM) leftyfb: ansibleesxi666: when one of those drives goes bad, you will lose all your data
[03:14] (7:42:33 PM) dxb left the room (quit: Ping timeout: 245 seconds).
[03:14] (7:43:26 PM) ansibleesxi666: our build team wants less build time & they do not care about the data on the build box, as the actual build goes into a central git repo.... these build nodes are purely for compute ... but the strange issue is why the server ISO shows the disk size as 600 GB & not 3 TB
[03:14] brb
[03:30] I did more googling & I think I have a workaround, i.e. in my case a desktop or server ISO will not make much difference as the core kernel is the same
[03:31] thanks for your time
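
The 600 GB vs 3 TB question above never got resolved in-channel. A minimal diagnostic sketch for comparing what each installer actually detects; it assumes a shell is available from the server installer (Alt+F2 in the 16.04 debian-installer), that the usual busybox tools are present there, and the HPE Smart Array driver names are assumptions:

    # from the server installer's shell, check what the kernel sees:
    cat /proc/partitions                    # block devices and sizes as detected
    dmesg | grep -iE 'hpsa|smartpqi|scsi'   # controller/driver messages (driver names assumed)

    # on the working desktop install, compare:
    lsblk -o NAME,SIZE,MODEL
    sudo parted -l
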
[05:05] good morning
[06:03] Good morning
[07:38] smoser: I tried more than 10 times with the same config
[09:58] Has anyone got experience with on-prem Landscape?
[11:17] muhaha, what's up?
[11:17] @waveform did you try to dockerize this big thing?
[11:18] muhaha, there are juju charms for it but I don't think there are any (official) docker images
[11:18] isn't containerizing it.... contrary to its purpose?
[11:19] Why?
[11:19] not necessarily - containers don't *have* to be many to a machine
[11:20] for instance, the juju charms are pretty flexible - they typically set up one machine for haproxy, another for pg, etc. etc. - now those might be "real" machines, or they could just be containers
[11:20] I will try to dockerize it, but I will have to understand this Landscape thingy first..
[11:21] I don't understand how to start with this.. For example, the quickstart package is useless for this. I will need to use landscape-server
[11:22] you're probably better off looking through what the quickstart script actually does - but trying it out on juju will give you a better idea of what a production setup really looks like
[11:22] (we don't recommend quickstart for scalable production deployments)
[11:23] muhaha: in that it has to control the entire machine and dockerizing it is isolating it? or am I misunderstanding something here
[11:28] I don't need to control the host machine
[11:29] I need to control other ones...
[11:29] That is why I need some gateway... It does not matter if it's running on bare metal or in a container
[11:30] oh, I wasn't aware you could install the server on premises. I thought it was strictly SaaS.
[11:34] unfortunately there is no open-source alternative on Ubuntu for managing other servers :( So I will have to use Landscape. Foreman cannot handle this afaik
[11:35] *self-hosted
[11:41] muhaha: there's always Salt Stack
[11:47] of course there are also Ansible and Chef
[11:49] ansible is way too slow, honestly. good for simple setups, but as you scale up the complexity of your config it's way, way, way too slow.
[11:50] but the point being, if you want something analogous to Landscape then SaltStack and Chef (and Puppet) are more appropriate because of the client-server model and the ability for clients to trigger events between them.
=== MannerMan_ is now known as MannerMan
=== jdstrand_ is now known as jdstrand
[14:33] Jimmy06: are you running the 18.04.1 release of the installer?
[17:12] So I have a 10G fiber connection that I'm trying to use for the physical connection to an Open vSwitch. I need the host to have access to the same interface to serve as a NAS on that port. I'm having problems with assigning a static IP address. DHCP works fine but isn't an option; the DHCP server will be a pfSense KVM. I've tried both ports with DHCP and they work fine. I tried both with
[17:12] static configs and I can't get DNS. I can still ping 8.8.8.8 but not google.com. I'm sure it's just a simple mistake caused by my lack of knowledge. Here is my interfaces file: https://pastebin.com/5v9YvE1p
[17:24] tekgeek1205: what version of Ubuntu?
[17:24] 18.04
[17:24] server, I reverted from netplan back to ifupdown
[17:28] tekgeek1205: you can talk to `systemd-resolve` to see how it's resolving hostnames?
[17:28] tekgeek1205: what is in your /etc/resolv.conf?
[17:29] checking
[17:30] nameserver 127.0.0.53....... so it's not getting a DNS server?
[17:30] tekgeek1205: that's systemd-resolve
[17:30] *systemd-resolved
[17:30] tekgeek1205: so you need to ask `systemd-resolve --status` what it is using
[17:31] Ah, I was curious how to get systemd-resolved to spit that out.
[17:31] do you want that pasted in?
[17:31] tekgeek1205: yeah, that's probably useful
[17:31] the first one was the resolv.conf
[17:32] tekgeek1205: or you can just read it, to see what is listed as 'DNS Servers' for that iface
[17:32] Not sure why they don't populate and maintain a text comment in /etc/resolv.conf since that's where people are going to look. Or even just a comment in that file telling people how to dig out the relevant status.
[17:32] mason: I mostly agree with you :)
[17:32] nacc_: I skipped the obvious first pick, "don't do that".
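
For reference, the checks being suggested here amount to the following two commands; this assumes a stock 18.04 setup where systemd-resolved manages /etc/resolv.conf:

    cat /etc/resolv.conf        # "nameserver 127.0.0.53" means queries go to systemd-resolved's local stub
    systemd-resolve --status    # shows the actual upstream DNS servers, globally and per link
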
[17:33] https://pastebin.com/1QUQReM7
[17:34] tekgeek1205: is that *all* you are getting?
[17:34] there are usually other lines than just that
[17:34] yeah, with a fresh boot and a static address....
[17:34] > static address
[17:34] did you set DNS record data in your netplan config?
[17:35] and if so, what is it?
[17:35] no, I'm using ifupdown; netplan and openvswitch are incompatible
[17:35] then did you set dns-nameservers in ifupdown?
[17:36] yeah....https://pastebin.com/5v9YvE1p
[17:36] that's my interfaces file
[17:36] dns-nameservers in ifupdown doesn't stop systemd-resolved from jumping in
[17:36] My hope is that it uses the interfaces information, but I'm not sure
[17:37] it might not be doing that properly, that's a systemd headache though.
[17:37] you can force your system to use the other DNS resolvers, but you'd have to fuss around with some systemd-resolved config files to do it
[17:37] For my part, I found that purging resolvconf helped.
[17:38] I haven't tested all permutations.
[17:40] I don't have a 16.04 in front of me, but systemd-resolve --status should be reporting a per-link entry, I thought
[17:42] tekgeek1205: mason: teward: "dns-nameservers" only works if you have resolvconf installed
[17:42] we also don't install that by default, because resolvconf and systemd-resolved both want to be authoritative for nameserver info
[17:43] cyphermox: so if you're using ifupdown with systemd-resolved, how do you pass it the DNS servers to query via ifupdown configs?
[17:43] or would that be a manual step called by `up` in the config?
[17:43] cyphermox: thanks for that info!
[17:43] teward: nothing you write in ifupdown is passed to systemd-resolved.
[17:43] or anywhere in systemd for that matter
[17:44] if you use ifupdown, you need to write resolv.conf yourself, or add your DNS in /etc/systemd/resolved.conf (the DNS= line)
[17:48] tekgeek1205: ^
[17:48] cyphermox: is this documented anywhere?
[17:48] cyphermox: Ah, I must be thinking of my "funny upgrade" I did last week then.
[17:49] waiting on a reboot then, I'll try changing /etc/systemd/resolved.conf
[17:56] Thank you guys!!! DNS is working! This is my first big project with Linux. I was about to give up and fall back to Linux bridges until I could get a 2nd 10Gb uplink card for my switches. I'm still a bit green in the Linux world.
[17:58] tekgeek1205: \o/
[18:05] now I can go on my merry way setting up containers and VMs. 10Gb from my workstation to my server has been a dream for years!!!! Time to put that ZFS array to work!
[18:10] zfs?! oh no!
[18:11] no?
[18:11] jk :)
[18:11] ah :)
[18:20] it's also the root FS......that was fun
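
A minimal sketch of the fix that worked here, following cyphermox's pointer: when ifupdown is used instead of netplan, the upstream nameservers have to be handed to systemd-resolved directly. The addresses below are placeholders; substitute whatever actually serves DNS on your network (the pfSense VM in this case):

    # /etc/systemd/resolved.conf
    [Resolve]
    # placeholder addresses - use your real DNS servers
    DNS=192.168.1.1 8.8.8.8

    # apply and verify:
    sudo systemctl restart systemd-resolved
    systemd-resolve --status
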
[19:41] do you guys have any recommendations for what is normally used as a "file server"?
[19:42] like when one of your users logs on to a server via ssh
[19:42] the current directory where they land is actually a mount to a different server
[19:42] is that normally done with just a samba server? any other more popular options?
[19:44] from unix to unix I'd just use sftp
[19:44] and let them connect to the system hosting the actual files, no network filesystem use
[19:45] oh, I meant like if I ssh into serverA, which is an application server
[19:45] I am taken to a directory that is actually part of a mount to serverB
[19:45] oh, do you want users to have a $HOME on a shared file server
[19:45] +1
[19:46] I've seen that done in the past
[19:46] and I'm curious as to how that is normally done (especially if the backend file server is redundant)
[19:47] we don't do that at all. NFS v4 supposedly has all sorts of nifty features for that, including clustered NFS, but I have no idea which features actually work well and are reliable
[19:47] thanks jelly
[19:48] so, related to this... are there any recommended file server clusters?
[19:48] in our company, many of the M2M processes just move files around
[19:48] I'm looking for a way to store those files on some kind of redundant system in case I have to do maintenance or upgrades on that resource
[19:53] So....we DO do that with NFS v4
[19:53] it's not trivial to set up
[19:54] so what you want is a clustered FS, which is also not trivial
[19:54] there are a few ways to do it
[19:55] and none of them are simple
[19:55] there are also those proprietary storage clusters which can export r/w via NFS
[19:55] ^^
[19:56] I bet netapp is simple!
[19:56] yup...those are the simplest, although more expensive
[19:56] ...
[19:57] glusterfs? ceph? do any of those work without being horribly slow?
[19:57] they are not made for speed
[19:57] they are made (in theory) for resilience
[20:01] I never used either, but would expect them to work out in this use case, since they are also used as storage backends in clouds
[20:02] (so there must be ways to configure them to not be super slow)
[20:02] don't storage backends in clouds just expose objects that are then used as block devices?
[20:09] They can both be used in this case, but neither is trivial
[20:11] I've heard of netapp
[20:11] Ussat, what would you say is the advantage of having a file system cluster?
[20:11] yes...netapp is a thing
[20:12] yeah, I read about glusterfs and was going to try it out in a virtual lab
[20:12] OK, first you need to differentiate between a cluster-enabled FS and a clustered FS
[20:12] what do you want to do
[20:13] the reason I ask is that I work with systems that need to have VERY high availability
[20:13] so, I'm looking at this from the perspective of: normally a server is the file server
[20:13] well, if I need to do maintenance on that server or it breaks, then all the applications will bork
[20:13] so you need a cluster
[20:13] so, I thought... a cluster of servers would take care of that problem
[20:13] correct, it can
[20:14] so, if serverA goes down, serverB will continue to serve whatever the apps need
[20:14] but that is different than a load-spread FS
[20:14] cluster
[20:14] what is the difference between cluster-enabled and the other option?
[20:14] you want active <<----->> active
[20:15] and they will share a FS, so when one dies, it releases the lock on the FS and the other picks it up
[20:15] oh ok, so this is not like serverA and serverB constantly synchronizing data between them?
[20:15] Well, you can have that, but it's different
[20:15] who hosts the FS?
[20:15] 'cause then what happens if the FS server goes down?
[20:16] btw, I know some of my thinking sometimes will never happen
[20:16] Generally the FS is hosted on both
[20:16] and shared
[20:16] so, just yell at me if I'm thinking the wrong way
[20:17] What's driving the need for the sharing to happen at the filesystem level?
[20:17] DammitJim: make sure you rebalance the filesystem often
[20:17] file processing and hosting
[20:17] the glusterfs
[20:17] rebalance? oh gosh
[20:17] so there are a few ways....what WE do is we have a HUGE Isilon that is the FS, which is replicated between datacenters
[20:17] DammitJim: it does it for you, there's a command you can use: gluster volume rebalance start
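
Coming back to the shared-$HOME idea raised above: the simplest (non-clustered) version is just an NFSv4 export on the file server mounted by the application server. A rough sketch, with hypothetical hostnames following the serverA/serverB naming used in the conversation; it assumes nfs-kernel-server is already installed on serverB and makes no claim about redundancy:

    # on serverB (the file server), export the home directories:
    # /etc/exports
    /home  serverA(rw,sync,no_subtree_check)
    sudo exportfs -ra

    # on serverA (the application server), mount it at boot:
    sudo apt install nfs-common
    # /etc/fstab
    serverB:/home  /home  nfs4  rw,hard  0  0
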
[20:18] DammitJim: I am very fond of gluster :(
[20:18] :) *
[20:18] Your needs might be better met by using an object store for the files, where you have an API that you use to push and pull files.
[20:18] :) or :( ?
[20:18] ^^^
[20:18] DammitJim: :)
[20:18] Because then you just load-balance the service in the usual way you would load-balance an HTTP service.
[20:18] yup
[20:18] xrandr, and you have gluster clients that mount those resources?
[20:18] we have VERY different needs
[20:18] DammitJim: yes.
[20:19] Ussat, I appreciate you sharing what YOU do
[20:19] I work at a hospital where shit always has to be available
[20:19] DammitJim: I use it for my new business. I now have 8.1 TB between 3 servers
[20:19] ours is a multi-million dollar setup
[20:19] multi-million in infrastructure?
[20:19] you sound like Orlando Health
[20:20] xrandr, how much storage do you have on each server?
[20:20] No, not in Orlando. We are a major research hospital/university
[20:20] btw, isn't it weird how many layers we have to take into account for "reliability"?
[20:20] DammitJim: If you're going to use gluster, you need to determine the setup you need. Do you want file replication (mirrored drives), or one huge drive?
[20:20] oh yeah, US SAT
[20:20] replication in my case
[20:20] DammitJim: I have two servers with 1.4TB each, and another server with a 6TB drive dedicated to gluster
[20:21] ok, replication is pretty simple
[20:21] so, glusterfs is pretty good? I can deal with the "slowness"
[20:21] it can be tuned
[20:21] DammitJim: there are ways to speed that up. Network compression, etc.
[20:21] yup
[20:22] DammitJim: also, depending on your needs, I would recommend using the SSL option with it. If your data is sensitive, let it be encrypted
[20:22] we also have to deal with encryption in flight and at rest
[20:23] yup, I'll need to do encryption
[20:23] Ussat: doesn't the SSL option for the volume handle that? or is that just transmission between the servers?
[20:24] xrandr, it might, but we use special encryption accelerator cards
[20:24] DammitJim: I'm gonna go out on a limb here and say you know all your servers' specs, right?
[20:24] xrandr, so, I should probably just test this in a lab, then
[20:24] no, I can't say I know the server specs
[20:25] DammitJim: absolutely!
[20:25] xrandr, we are talking almost a petabyte of data :)
[20:25] Ussat: Which filesystem do you use at the server level? ext4 or xfs?
[20:25] and the cluster only needs to be something like 1TB
[20:25] and we'll probably only use about 1/3
[20:26] DammitJim: I didn't have a lab to test it on, so I just went live with it and worked it out as I went
[20:26] The FS is on this: https://www.ibm.com/us-en/marketplace/flash-storage-virtualization
[20:26] but we know we are growing, and that should be able to keep us working for about 2 years
[20:26] and it's XFS
[20:26] that looks pretty
[20:27] it's nice that the 1st thing they say is "Save money"
[20:27] we have one in each data center, maxed out
[20:27] it's sales, of course they do :)
[20:27] 'cause you are saving money, but it's more like buying insurance
[20:27] DammitJim: There's #gluster if ya need anything :)
[20:27] thanks xrandr
[20:27] and if I'm around I'd be happy to answer any questions
[20:27] again, xrandr, you have to install gluster clients on the other servers that want to mount the file system, right?
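
To make the replication setup being discussed concrete, here is a minimal sketch of a replicated GlusterFS volume plus a client mount. The hostnames (gluster1-3), volume name, and brick path are all hypothetical; replica 3 is used because two-brick replicas are commonly warned against for split-brain reasons, and the bricks are assumed to live on their own filesystem:

    # on each storage node (gluster1, gluster2, gluster3):
    sudo apt install glusterfs-server

    # from gluster1, form the pool and create a 3-way replicated volume:
    sudo gluster peer probe gluster2
    sudo gluster peer probe gluster3
    sudo gluster volume create sharedvol replica 3 gluster1:/data/brick1 gluster2:/data/brick1 gluster3:/data/brick1
    sudo gluster volume start sharedvol

    # on a machine that only mounts the volume:
    sudo apt install glusterfs-client
    sudo mkdir -p /mnt/shared
    sudo mount -t glusterfs gluster1:/sharedvol /mnt/shared
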
[20:27] thanks
[20:28] glusterfs will do what you want, the Uni proper uses that
[20:28] DammitJim: no, you need to install gluster server on every server you want to add to the gluster volume
[20:28] sorry, I misread that
[20:29] Gluster client on any machine that wants to mount the gluster volume. Gluster server for any server that wants to contribute to the volume
[20:29] yup, got it
[20:29] do the gluster servers have to be the same for replication?
[20:29] same, meaning ?
[20:29] how do you address the cluster? floating IP?
[20:29] You'll have one master server, and a bunch of slaves
[20:29] same like... same storage, same RAM, same CPUs
[20:30] the one server is the initial volume
[20:30] then you add another to it from the first server, and so on
[20:30] DammitJim, well, they should of course
[20:30] DammitJim: They should, but it's not necessary
[20:30] oh yeah, 'cause xrandr is not doing replication (6TB + 1TB + 1TB)
[20:30] but you might hit some performance issues
[20:31] DammitJim: my biggest suggestion is to make sure they are all of the same network speed
[20:31] Don't have 100Mb cards in some, and 1Gb cards in others
[20:31] so, like 1Gb and stuff
[20:31] for some reason, gluster gets unhappy with that.
[20:31] got it
[20:32] Or at least it did with me
[20:32] DammitJim: what version of Ubuntu server are you running?
[20:34] Ussat: how big is your Gluster volume?
[20:34] * xrandr is curious
[20:35] right now? please don't ask
[20:35] I have a ton of 14.04
[20:35] but I would probably set this up with 18.04
[20:37] Good choice. Do a little research... the packages that are bundled with ubuntu-server for gluster are apparently EOL. There are repos out there with the 4.x releases, which are still in their production phase and are supported
[20:37] oh really?
[20:38] man, one can't win, huh? everything is EOL these days, but that's probably because we haven't been able to keep up with our processes
[20:38] DammitJim: yeah. I also have a bone to pick with the folks who wrote the gluster documentation. When they stated it should be backwards compatible, they really needed to specify between which versions
[20:38] I had a 3.5 and a 3.12, I think
[20:39] They did not like each other
[20:39] oh yeah, one would think that one can upgrade 1 node and then the other
[20:39] I'm sure there are some limitations
[20:39] Oh there are. You can only have a gap of about 3 or 4 versions
[20:40] DammitJim: and you can upgrade one node and then the other. Just make sure you keep up with it.
[20:40] so that the versions don't fall too far apart to where you can't do that
[20:40] I'm actually in a pickle with rabbitmq and erlang because I let it go too long
[20:41] DammitJim: you also need to figure out which communication protocol you want to use between the servers (bricks). There's TCP, UDP, and RDMA. I use TCP
[20:43] Really read through the gluster docs and figure out how you want to set things up
[20:44] some things are changeable, some things are not
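
One way the "start with one set of bricks, then add more from the first server" workflow described above can look, continuing the hypothetical names from the earlier sketch: bricks are added in multiples of the replica count (which turns the volume into a distributed-replicated one), and then the rebalance command mentioned earlier spreads existing files onto the new bricks:

    # from an existing node, bring the new servers into the pool:
    sudo gluster peer probe gluster4
    sudo gluster peer probe gluster5
    sudo gluster peer probe gluster6

    # add a second replica set of bricks, then rebalance onto them:
    sudo gluster volume add-brick sharedvol gluster4:/data/brick1 gluster5:/data/brick1 gluster6:/data/brick1
    sudo gluster volume rebalance sharedvol start
    sudo gluster volume rebalance sharedvol status
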
[20:45] do you have the servers directly connected or going through a switch?
[20:45] DammitJim: 2 are in the same datacenter, 1 is in another
[20:45] so 2 connected directly to each other via crossover cable, and the other is on a 10Gb connection
[20:45] I think in my case actually, they are going to be in the same Cisco UCS
[20:46] UCS?
[20:47] Cisco Unified Computing
[20:47] Is that Cisco
[20:47] si
[20:47] version of Amazon's cloud computing?
[20:47] no
[20:48] ( hit enter instead of ' ) lol
[20:48] these are blade servers, basically
[20:48] UCS is Cisco's "hyperconverged" hardware server platform
[20:48] jelly: hmm. I've not heard of that before... gonna have to do some researching
[20:49] I think it's existed for like 10 years now; they're up to what, 3rd-4th generation?
[20:49] 4th
[20:49] maybe 5th (my system is 2 years old)
[20:50] SAN backend and VMware stuff
[20:50] jelly: I am clearly out of the loop on that stuff :) A guy I used to work with several years ago put a bad taste in my mouth as far as Cisco goes
[20:50] Cisco is a royal pain in the bottom
[20:50] DammitJim: your restraint is admirable
[20:50] a company that I can't say enough good things about... is Nimble Storage, even though they are HP now
[20:50] these seem to be just x86 brand-name blades with some interesting features
[20:51] HPE bought Nimble?
[20:52] yeah... beginning of the year...
[21:00] UCSes are fucking GREAT
[21:00] you think so, Ussat?
[21:00] we have a few 4th gen
[21:00] oh yea
[21:01] all my VMs are on them
[21:01] do you have the luxury of running a vCenter with dozens of ESXi hosts?
[21:03] Well.....actually, I just run the *nix side of things, but yes
[21:03] DammitJim, again, what we do is a pretty sizeable enterprise
[21:03] cool
[21:04] man, what a pain this erlang stuff is
[21:04] DammitJim, we have a little over a petabyte in storage right now
[21:04] and we need to keep stuff... almost forever, it seems
[21:05] forever is like a looong time
[21:05] yes
[21:05] medical records need to be kept a long time
[21:05] hey guys, how do I perform an upgrade of erlang from version 20.1-1 to just 20.3-1 even though the candidate is 21.0-1?
[21:06] DammitJim: is 20.3-1 available via `apt-cache policy`?
[21:07] yes
[21:07] and none of those numbers match ubuntu package versions
[21:07] DammitJim: pastebin it? but you can do `sudo apt-get install erlang=`
[21:08] that's it, nacc_!!!
[21:08] thanks!
[21:09] oh gosh, I didn't realize there are a bunch of erlang packages like erlang-ic and erlang-gs that I don't want to automatically upgrade to 21, but want to keep at 20.3
[21:11] or how do I tell Ubuntu that when I do a: sudo apt-get dist-upgrade
[21:11] I don't want erlang packages to be upgraded to 21.0, but to 20.3
[21:12] something opened the floodgates to recommend that candidate, which in my case is not compatible with rabbitmq
[21:12] ugh, gotta run
[21:12] see you guys
[21:12] !pinning | DammitJim
[21:12] DammitJim: pinning is an advanced feature that APT can use to prefer particular packages over others. See https://help.ubuntu.com/community/PinningHowto
[21:24] 18.10 beta is out, right?
[21:26] Epx998, beta is not out for Cosmic https://wiki.ubuntu.com/CosmicCuttlefish/ReleaseSchedule
[21:29] http://cdimage.ubuntu.com/daily-live/current/ isn't a beta?
[21:33] as the url indicates, it's a daily build
[21:34] i.e. potentially broken, unstable, pre-release
[21:35] (the download web page also says "daily build")
[21:37] Beta freeze isn't until Sept 27th anyhow for 18.10
[21:38] ( according to https://wiki.ubuntu.com/CosmicCuttlefish/ReleaseSchedule )
[21:59] yeah, that's ok, some dev wants to test some GPU stuff on it
[21:59] thanks for the info
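
A rough sketch of the two approaches suggested for the erlang question above: installing a specific version apt already knows about, and pinning the erlang packages so a dist-upgrade doesn't pull in 21.x. The version strings are placeholders taken from the conversation (they come from a third-party repo, not Ubuntu's own packaging); match them to whatever `apt-cache policy erlang` actually lists on your system:

    # see which versions are available, then install one explicitly:
    apt-cache policy erlang
    sudo apt-get install erlang=20.3-1

    # /etc/apt/preferences.d/erlang - keep every erlang* package on the 20.3 series
    Package: erlang*
    Pin: version 20.3*
    Pin-Priority: 1001
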