
Linux Distro Noob thread of Linux noobs

Judging by the specific error message that is reported when NeoGAF goes down, the server is a Commodore Amiga.

"Guru Meditation"


But seriously, it openly used lighttpd/1.4.15 on Linux until late last year. It now reports its server software as "NeoGAF" in the http headers.

I'm curious… what would be the most appropriate route to making the website more scalable, other than "lighttpd sucks, use ${NEWFANGLED_WEB_SERVER} instead!"
 

Pctx

Banned
As I typed up last night but GAF ate because of errors...

I'd like to see the 200 GET requests to see how the server is handling the load. Obviously any of us can armchair-admin when it doesn't cost us anything... let me say that again: we can all armchair-admin the site when it doesn't cost us anything. EviL's got a hell of a job keeping this running, among other things going on with his life.

From the list of online users, I would venture that the TTL and KeepAlives are insanely large, since basically each session stays live until the user leaves the site. The other thing I noticed with the new server is that it's running Varnish, which helps with static caching, but from what I know of reverse proxies there are trade-offs for site visitors when content changes as rapidly as it does on GAF.

I'm currently looking into how NGINX does reverse proxying, as it is different from Varnish, but I'm not going to have a good understanding of that until the end of summer at the earliest. I've heard that httpd is about as lightweight as NGINX, so I doubt moving would make much sense unless EviLore had someone to manage the server, which I'm not sure he does.
 

Vanillalite

Ask me about the GAF Notebook
PREFACE: I don't want this to come off badly at all. I just think super highly of Linux-GAF and some of you are really smart and really helpful. I get that anyone can be an armchair admin. Just figured everyone here is sooo awesome and helps all of us n00bs out. Why not help out GAF as a whole with this? If I'm out of bounds I apologize.

There's a wonderful blog post from Jorge Castro on how OMGUbuntu was getting taken down by too many visitors, and how they moved the whole site to Amazon Web Services in a day; using some of the newer server stuff they could clone instances of the site on the fly and run them.

AWS allows you to put your info in an elastic setup that lets you scale server capacity up and down on a pay-per-usage basis. I see no reason this type of setup couldn't also be used for GAF. The only caveat being that EviLore wants to keep everything ever posted available all day, every day.

Still, if you could create a charm that clones even just the last month or so of posts when we need extra capacity, that would be fine for most people IMO versus GAF being down.


Blog Post by Jorge Castro!
 

Pctx

Banned
No, I think you're spot on, Brett. AWS is a good solution for some, not a great fit for others. Size and scale on AWS is awesome but could be cost-prohibitive for GAF. Anyways, we'd probably need more information from Evil on the stats of the site to see what type of server we're currently on vs. what we could get.

My general experience has been: give yourself enough overhead, but then tune the web server down to where you're maximizing the system in use. The allure of AWS (I've found from other admins) is that they simply up the RAM, which in terms of developing the web app isn't fixing the problem. I would like to curb the 500 errors though, as I know enough about web engines now to at least look at what we've got.
 
I read the OMGUbuntu post a while ago (perhaps linked from here?). I don't know how applicable it is to NeoGAF: a blog has a much higher read:write ratio than a forum, so it would benefit much more from the caching, and from easily adding servers to read from that cache, that AWS or Azure (or whatever) would provide. I guess you could still do it, with an AWS instance handling post creation/revision since that's more processor/RAM intensive than serving static cached files.

Off the cuff, I think I would (and no idea if any of this is being done now or not):
1. Cache the topic listing, refresh on a clock. Get rid of the user list.
2. Cache topic pages other than the most recent (force everyone to 50pp to simplify this), have forum software flush relevant cached page as edits are made to contained posts.
3. Cache the most recent page for non-logged in sessions, refresh periodically.
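
To make 1 and 3 a bit more concrete, here's a very rough nginx-style sketch (definitely not what GAF actually runs; the URLs, cookie name, and timings are all invented, and the flush-on-edit part of 2 would need the forum software or a purge module on top of this):

Code:
http {
	proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=forum:50m;

	upstream backend {
		server 127.0.0.1:8080;
	}

	server {
		listen 80;

		# 1. Topic listing: serve a cached copy, refreshed roughly once a minute
		location = /forumdisplay.php {
			proxy_cache forum;
			proxy_cache_valid 200 1m;
			proxy_pass http://backend;
		}

		# 3. Thread pages: short-lived cache, skipped entirely for logged-in cookies
		location /showthread.php {
			proxy_cache forum;
			proxy_cache_valid 200 30s;
			proxy_cache_bypass $cookie_sessionid;
			proxy_no_cache $cookie_sessionid;
			proxy_pass http://backend;
		}
	}
}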
 

Vanillalite

Ask me about the GAF Notebook
Problem is quickly caching all of the frick'n image heavy threads. Seems like they would be a bitch to handle and probably like 100 times the size of mostly text threads.
 
Problem is quickly caching all of the frick'n image heavy threads. Seems like they would be a bitch to handle and probably like 100 times the size of mostly text threads.

Wouldn't that be really easy on account of the images not being on NeoGAF servers?

I mean, avatars are currently, but not stuff in img tags.
 

BTMash

Member
As I typed up last night but GAF ate because of errors...
I'm currently looking into how NGINX does reverse proxying, as it is different from Varnish, but I'm not going to have a good understanding of that until the end of summer at the earliest. I've heard that httpd is about as lightweight as NGINX, so I doubt moving would make much sense unless EviLore had someone to manage the server, which I'm not sure he does.

Well, NGINX would be a reverse proxy to whatever you want. You can have nginx act as a load balancer between servers, so you have something like:

Code:
http {
	upstream hacluster {
		server 1.2.3.4;
		server 5.6.7.8;
	}

	server {
		listen 80;
		server_name example.com;

		location / {
			proxy_pass http://hacluster;
			# Bunch of other niceties.
			proxy_set_header X-Real-IP $remote_addr;
			proxy_set_header Host $http_host;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		}
	}
}
And you effectively have a load-balanced web server. You can then use caching and whatnot as well if you'd like. Or you could just have Varnish send the request off to nginx, which will do the PHP handling -- which is *also* a reverse proxy request, to a PHP server. You need to have a CGI/FastCGI process running which nginx can send the proxy request off to. My recommendation is to look at installing PHP-FPM, which manages PHP in a FastCGI process (and does it very well). Then your server config might look something like the one I use for Drupal at http://wiki.nginx.org/Drupal. With php-fpm, you can have it running from a socket and set my upstream as php:
Code:
	upstream php {
		# server unix:/tmp/php-cgi.socket;
		server 127.0.0.1:9000;
	}
So when it comes to configuring your site, you just have something like
Code:
location ~ \.php$ {
	fastcgi_split_path_info ^(.+\.php)(/.+)$;
	# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
	include fastcgi_params;
	fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
	fastcgi_intercept_errors on;
	fastcgi_pass php;
}
for your sites and don't have to specify where to find the php process each time.
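
(For completeness, the bit on the PHP-FPM side that has to line up with that upstream is just the listen line in the pool config; the path below is a guess and varies by distro:)

Code:
; e.g. /etc/php5/fpm/pool.d/www.conf (location differs between distros)
[www]
; match the TCP upstream above...
listen = 127.0.0.1:9000
; ...or, for the commented-out socket variant:
; listen = /tmp/php-cgi.socket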
 

EviLore

Expansive Ellipses
Staff Member
I swear Linux-GAF needs to help EvilLore get a better host. I don't see why with as big as this site is he doesn't have something like openstack or aws to host. Then he can use say amazon's services to bump up or down the traffic needed almost on the fly with the right Linux tools.

No reason in the world he shouldn't be able to have a scalable forum that he can double traffic on whenever he knows there is going to be a ton of traffic like say for the E3 conferences which he can then dial back to normal in just a few hours.

The recent downtime stuff isn't because of traffic overload. We're having some strange issues under the hood.

We're working on a fully redundant cloud-based array for this year though that should prevent the site from ever really going down at all except in really catastrophic scenarios, but that takes time to implement.
 

Vanillalite

Ask me about the GAF Notebook
The recent downtime stuff isn't because of traffic overload. We're having some strange issues under the hood.

We're working on a fully redundant cloud-based array for this year though that should prevent the site from ever really going down at all except in really catastrophic scenarios, but that takes time to implement.

Thanks for the reply. I've been thinking about this for a while though because even beyond the current issues I've always wondered about GAF and the load balance.
 
Does anybody know of some good raster RIP software? My brother wants to make huge prints, poster size, but needs software to adjust the image and the max print size. Or do Linux and Ubuntu not have a max print size?
 

Pctx

Banned
The recent downtime stuff isn't because of traffic overload. We're having some strange issues under the hood.

We're working on a fully redundant cloud-based array for this year though that should prevent the site from ever really going down at all except in really catastrophic scenarios, but that takes time to implement.

Evil, so is this more VB code related and not host related? If that's the case, apples and oranges. :)
 
Does anybody know of some good raster RIP software? My brother wants to make huge prints, poster size, but needs software to adjust the image and the max print size. Or do Linux and Ubuntu not have a max print size?

Isn't "raster rip" a bit redundant? ;)

I'm interested in hearing more about this. It sounds like you're asking two separate questions. I don't know much about RIP software (other than that ghostscript is an example of such), but I do know that pretty much any raster editor can allow you to scale to an arbitrary page size, and some of them allow you to do page tiling.

The impression I get is that support for actual page sizes is a printer driver thing.
 
Isn't "raster rip" a bit redundant? ;)

I'm interested in hearing more about this. It sounds like you're asking two separate questions. I don't know much about RIP software (other than that ghostscript is an example of such), but I do know that pretty much any raster editor can allow you to scale to an arbitrary page size, and some of them allow you to do page tiling.

The impression I get is that support for actual page sizes is a printer driver thing.

What are some of these editors?
 
What are some of these editors?

In Krita, "Image → Scale to New Size → Print Size".
In GIMP, "Image → Print Size"
In WhateverOffice* Draw, "File → Print → WhateverOffice Draw → Size".

Beyond all that, it depends on whether or not the printer drivers support whatever custom page size you're working with. I think this is the same in pretty much any operating system.

If you have ImageMagick installed, you could type

Code:
identify -format "%[fx:w/72] by %[fx:h/72] inches" document.png

to find the current print geometry. ImageMagick doesn't let you directly specify geometry in terms of inches, but you could always just figure out how much bigger it needs to be, then:

Code:
convert document.png -resize 200% newdocument.png

That'll result in something double the size. Or maybe halve the "-density" instead (it takes a DPI value, so e.g. "-density 36" if the image started at 72 DPI), which I think will keep the pixel geometry the same but double the print geometry. People knowledgeabler than I should pick it up from here.
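
For instance (a rough sketch; the file name is made up and the numbers assume the image started at 72 DPI), ImageMagick can do the arithmetic and re-tag the density without touching the pixels:

Code:
# Print size at 72 DPI, same formula as above
identify -format "%[fx:w/72] by %[fx:h/72] inches" poster.png

# Re-tag the same pixels at half the density, doubling the print size (metadata only)
convert poster.png -units PixelsPerInch -density 36 poster-print.png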



In case you're going for page tiling, this place discusses different software that can allow for taking an image and splitting it up among multiple pages. Like doubling the height and width of a one-page picture so that it'll print two pages wide and two pages high.




* StarOffice, OpenOffice, LibreOffice, etc.
 
Wow, this thread makes me miss Linux. Used it and BSD for 10 years exclusively as my desktop. Switched to Windows 7 to finish school and (sadly) never went back :(
 

zoku88

Member
Anyone use Lubuntu?

I think it looks like a contender to replace operating systems on Netbooks.

I don't use it, but speaking of netbooks, I recently replaced Ubuntu (which is a distro I'm not particularly fond of) with Arch Linux and am liking it so far.

I'm not sure how I feel about it compared to my current main distro, Gentoo. I'm debating whether my next computer will have Arch or Gentoo.
 

Pctx

Banned
Anyone use Lubuntu?

I think it looks like a contender to replace operating systems on Netbooks.

I use it on my VM ESXi host for a GUI Linux when the other sysadmin needs a UI. 12.04 is a lot more polished than previous versions. Anything in particular you're wondering about?
 

clav

Member
Not at the moment. I just installed it on my netbook, and the Broadcom wireless card seems to have some trouble holding a steady connection on wireless repeaters.

Ideally, I should switch to Atheros, but I don't feel like spending money.

There is a BIOS update, but Dell only made it compatible for Windows unless there's a way to extract the BIOS directly. There only appears to be a "Start" button in the BIOS flasher, so I'll probably have to throw in a copy of Windows some other time.

Why is Chromium only version 18? Seems to defeat the purpose of using it.
 

itxaka

Defeatist
Not at the moment. I just installed it on my netbook, and the Broadcom wireless card seems to have some trouble holding a steady connection on wireless repeaters.

Ideally, I should switch to Atheros, but I don't feel like spending money.

There is a BIOS update, but Dell only made it compatible for Windows unless there's a way to extract the BIOS directly. There only appears to be a "Start" button in the BIOS flasher, so I'll probably have to throw in a copy of Windows some other time.

Why is Chromium only version 18? Seems to defeat the purpose of using it.


Link to the BIOS?
Also, Chromium is the open source version, so maybe it doesn't have some proprietary code, thus being one version behind. Or maybe it has different versioning?
 

Pctx

Banned
Not at the moment. I just installed it on my netbook, and the Broadcom wireless card seems to have some trouble holding a steady connection on wireless repeaters.

Ideally, I should switch to Atheros, but I don't feel like spending money.

There is a BIOS update, but Dell only made it compatible for Windows unless there's a way to extract the BIOS directly. There only appears to be a "Start" button in the BIOS flasher, so I'll probably have to throw in a copy of Windows some other time.

Why is Chromium only version 18? Seems to defeat the purpose of using it.

Wireless drivers are funny on Linux since they seem to be so hit and miss with certain distros. Ubuntu is generally good as a catch all but sometimes you have to get creative. As far as the BIOS update, that's a good question on extracting it. Don't have an answer to that one.
 

zoku88

Member
Link to the BIOS?
Also, Chromium is the open source version, so maybe it doesn't have some proprietary code, thus being one version behind. Or maybe it has different versioning?

Chromium and chrome use the same version numbering, AFAIK.

It's probably just that the Ubuntu repos (or whatever Lubuntu uses) hold back some packages until there is a new release of the distro.
 

qizah

Unconfirmed Member
This is a pretty good resource for beginners to read through; at least, it's helped me get even more familiar with the environment, and I've been using Ubuntu for about a year and a half now.
 

Vanillalite

Ask me about the GAF Notebook
Probably should post this in the Android thread, but funk it. I'm posting here. Trying to get the android software development stuff up and running in Ubuntu 12.04.

I decided to get more up-to-date versions of things, so I downloaded Eclipse 4.2 from the web versus grabbing it from the Software Center. I also installed OpenJDK 7.

I got the link to add the repository from Google for Eclipse 4.2 from the Google Developer Docs, and the link worked.

I installed a few things to get things going, but I got a couple errors. When I go back to see if things installed correctly though it acts like everything is installed.

So I moved on to the Android SDK Manager and selected everything I wanted from that too. Everything seemed to install fine there as well, except for one Motorola deal where it wanted me to log in, so I just skipped it.

Problem comes in now when I try to load up the Android Virtual Device manager. I can click to add a new device, create a name, select which version of Android I want out of what I downloaded from the SDK Manager, and set how much space to give the SD card. Then I'm ready to create, and I hit Create AVD. Now Create AVD isn't greyed out or anything, but clicking on it does nothing. I'm sort of at a loss on how to get that working.

Also, besides getting that working, is there anything else I really need to do, or am I good to go to start coding away for Android in Ubuntu? I tried to create an Android project and it wanted to download another plugin it thought it was missing, and I let it do that. Then it seemed to create an Android project. My main issue is just getting the dang AVD to work at this point.
 

zoku88

Member
When I had Ubuntu a while ago, I'm pretty sure that it was trivial to get the SDK and the eclipse stuff working.

What errors did you get (the ones where you looked and everything looked 'ok'?) I might be able to guess what went wrong, although I would be doing things from memory.

EDIT: or were the things you were doing at that step irrelevant?
 

Vanillalite

Ask me about the GAF Notebook
When I had Ubuntu a while ago, I'm pretty sure that it was trivial to get the SDK and the eclipse stuff working.

What errors did you get (the ones where you looked and everything looked 'ok'?) I might be able to guess what went wrong, although I would be doing things from memory.

EDIT: or were the things you were doing at that step irrelevant?

IDK, I tried this before about a year or so ago with no problems.

Now for some reason I just can't create any Android Virtual Devices. I mean I can click new and try and create one. Yet nothing happens when I click create. I can spam the create button all I want, and nada.
 

Vanillalite

Ask me about the GAF Notebook
So we finally got official, non-back-channel confirmation direct from Valve about their Linux goings-on. It actually matches all of the back-channel info we were getting, even if it was from sources that have been iffy in the past.

Valve has always liked Linux. They use Ubuntu servers in their office, and they allow Linux servers to host Valve games. Some of these people said "hey, we like doing all of this, so let's continue!" That meant trying to port all of Steam over, which means they also had to port a game to see if everything was working. That was L4D2, as previously reported.

I know a lot of uber FOSS nerds will balk at this cause it's not really open source, despite Valve trying to be really transparent about their doings even if it's not open source in the traditional sense. Also a lot of people will balk cause it's Ubuntu only, despite that being the logical step of targeting one specific distro to work on the port versus everything all at once. That leaves them with only a couple logical choices, and they went with the no-brainer: using Ubuntu and targeting the latest LTS release (perfect timing for Valve).

Now it'll be interesting to see how the game performs. We all know Nvidia drivers are crap and ATI drivers are like the devil's spawn. The open source Nvidia drivers seem to be better than the open source ATI ones. It will be interesting to see if Valve could actually get the Nouveau drivers to run better than the official Nvidia ones, since the Nouveau drivers are open source for Valve to fuck with. Would be a big triumph in terms of the FOSS ideal if that were to happen.

At any rate, some of us on Linux-GAF have to be happy and/or excited.

PS: Is it presumptuous of me to be considered the |OT| Ubuntu whore and the person that should get 1st dibs on the Steam Client for Ubuntu thread on the |Gaming| Side of things? :p
 
Ubuntu only? That catches Mint as well, so I'd say it makes sense -- though I hope a more complete Linux release (at least hit Fedora as well, Suse can bite me) will be forthcoming.

Now it'll be interesting to see how the game performs. We all know Nvidia drivers are crap and ATI drivers are like the devil's spawn. The open source Nvidia drivers seem to be better than the open source ATI ones. It will be interesting to see if Valve could actually get the Nouveau drivers to run better than the official Nvidia ones, since the Nouveau drivers are open source for Valve to fuck with. Would be a big triumph in terms of the FOSS ideal if that were to happen.
Nvidia's drivers are by no means crap, they're just feature-incomplete and proprietary as eff.

ATI's drivers... now those fit the definition of crap. Also, the bolded might be the understatement of the year. ATI's closed source drivers are awful, but the open source drivers might as well not exist.
 

zoku88

Member
Ubuntu only? That catches Mint as well, so I'd say it makes sense -- though I hope a more complete Linux release (at least hit Fedora as well, Suse can bite me) will be forthcoming.

Hopefully a tarball will come >.>

Well, I guess there's nothing stopping someone from extracting the package from the deb file...
 

angelfly

Member
I didn't plan on using it, but announcing it as Ubuntu only just seems kind of silly. Limiting themselves to only one distro won't do them any favors. That said, this is probably a case of Canonical reaching out to Valve and offering to work with them. I don't see any other distro bothering to do that (why should they?), so it's probably why Ubuntu is the only currently supported one.
 

thcsquad

Member
I didn't plan on using it, but announcing it as Ubuntu only just seems kind of silly. Limiting themselves to only one distro won't do them any favors. That said, this is probably a case of Canonical reaching out to Valve and offering to work with them. I don't see any other distro bothering to do that (why should they?), so it's probably why Ubuntu is the only currently supported one.

They stated pretty clearly and definitively why Ubuntu is the only supported distro for now, and the reason certainly doesn't seem silly. It's just common sense for a software engineering project.

Why Ubuntu? There are a couple of reasons for that. First, we’re just starting development and working with a single distribution is critical when you are experimenting, as we are. It reduces the variability of the testing space and makes early iteration easier and faster. Secondly, Ubuntu is a popular distribution and has recognition with the general gaming and developer communities. This doesn’t mean that Ubuntu will be the only distribution we support. Based on the success of our efforts around Ubuntu, we will look at supporting other distributions in the future.
 

zoku88

Member
They stated pretty clearly and definitively why Ubuntu is the only supported distro for now, and the reason certainly doesn't seem silly. It's just common sense for a software engineering project.

I think he's saying it's silly, because... you normally don't target specific distros....

Like, if something works in Ubuntu, it should work in every distro, because every distro is linux based. (otherwise, Linux would be a real mess.)

That's why you can like, download firefox and run it.

Actually, if there was a case where it worked in Ubuntu, but not in other distros, I would say that there is something wrong with it.

Question: what's different if Steam gets packaged for distro A vs distro B? Doesn't apt-get work for every distro?
Apt is for debian based distros. Fedora and Suse use yum. Arch uses pacman. Gentoo uses Portage. etc
 

zoku88

Member
My jailbroken Apple TV uses apt-get and so does Ubuntu. Is it because I've installed apt-get services first?

Ubuntu is debian based. As for apple tv, idk.

I mean, as far as I know, there's nothing stopping anyone from installing an alternate package management system on their computer, but.... I would think it would be kinda messy...
 
I think he's saying it's silly, because... you normally don't target specific distros....

Like, if something works in Ubuntu, it should work in every distro, because every distro is linux based. (otherwise, Linux would be a real mess.)

That's why you can like, download firefox and run it.

Actually, if there was a case where it worked in Ubuntu, but not in other distros, I would say that there is something wrong with it.


Apt is for debian based distros. Fedora and Suse use yum. Arch uses pacman. Gentoo uses Portage. etc
I'm curious, have you done distribution of linux binaries before? Based on how you wrote that post, I get the sense you haven't...

The difficulty isn't packaging (which is work, but it's just work), it's dynamic libraries. You have to either:

(1) create and test a separate build for each platform you support (e.g. ubuntu 12.04, ubuntu 11.10, ubuntu 11.04, rhel6, rhel5... everywhere the library versions are different), linking against the libraries directly (what shows when you call ldd)

or

(2) write some really clever libdl code to "find some version of that library I care about, any version!" so you don't dynamically link at all, but load libraries at runtime... and then test that on all your supported platforms too.

In general, the reason linux development works with so many different distros (essentially, different permutations of applications and libraries) is that with everything being open source, distros are in charge of compilation, distribution, and testing. Introducing a closed source binary means you need to handle all of that yourself, and handle all of it for every platform you support.
 

zoku88

Member
I'm curious, have you done distribution of linux binaries before? Based on how you wrote that post, I get the sense you haven't...

The difficulty isn't packaging (which is work, but it's just work), it's dynamic libraries. You have to either:

(1) create and test a separate build for each platform you support (e.g. ubuntu 12.04, ubuntu 11.10, ubuntu 11.04, rhel6, rhel5... everywhere the library versions are different), linking against the libraries directly (what shows when you call ldd)

or

(2) write some really clever libdl code to "find some version of that library I care about, any version!" so you don't dynamically link at all, but load libraries at runtime... and then test that on all your supported platforms too.

In general, the reason linux development works with so many different distros (essentially, different permutations of applications and libraries) is that with everything being open source, distros are in charge of compilation, distribution, and testing. Introducing a closed source binary means you need to handle all of that yourself, and handle all of it for every platform you support.

You are correct that I don't distribute linux binaries. However, I do use binaries from other people (for example: occasionally Firefox to test something, Desura, FAH, etc.). Those aren't distro-specific...

As far as libraries go, don't you actually not have to know where they're located? I mean, the search paths would be in ld.so.conf (or whatever that file is called), right? I assume that's correct, because when you don't have the libraries, it's usually an ld error (ld: can't open shared object X: file does not exist) or something like that...

As far as making sure the user has the correct libraries to run the program, that's either the user's responsibility or the package manager's (in the distros.)

I mean, for example, if Steam needed a certain version of libjpeg (like, libjpeg.so.6 for whatever reason), what is stopping the user from getting it?

EDIT: I only used libjpeg as an example, since that's the thing I'm usually missing. I guess the library changes a lot, since I always seem to require an older version...

EDIT2: Please correct me if I'm wrong, these are just my impressions of how dynamic loading works.
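
For what it's worth, you can poke at most of this from a shell; this is just generic loader stuff, nothing Steam-specific, and libjpeg/convert are only examples:

Code:
# What the dynamic linker currently knows about (built from /etc/ld.so.conf* by ldconfig)
ldconfig -p | grep libjpeg

# Which libraries (and versions) a particular binary will actually pull in
ldd /usr/bin/convert | grep libjpeg

# Point the loader at an extra directory for one run, without touching the system
LD_LIBRARY_PATH=$HOME/oldlibs ldd /usr/bin/convert | grep libjpeg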
 

Pctx

Banned
Well, I've officially (unofficially, because if it was official I would get yelled at) started the case study of moving from Ubuntu to CentOS for web serving. The fact that cPanel isn't on Debian has pissed me off enough (as well as my web content providers) that we're starting to move that way.

Ironically, I remember using RHL in the past and HATING the RPMs... oddly enough, they are way the hell better than .debs for what you do.

Only thing that is wonky is where stuff exists. Best example is configuring eth0 for my VM.

Ubuntu: /etc/network/interfaces

CentOS: /etc/sysconfig/network-scripts/ifcfg-eth0

Logically I look at both and I think... "Hmmphh, yeah, yeah, I see the logic... but why the gap?" (Meaning: why is it so different?) In the reading I've been doing, RHL is a school of thought vastly different from that of Debian. Thankfully, since I know vi and how to navigate a CLI, I can fumble my way through with some web searches and such. As to the "why move?": well, as previously mentioned, cPanel for administration, plus security and rolling updates, and to be honest, SELinux is a pretty big step up from bubblegummed security solutions. AppArmor is fine for day-to-day stuff, but for actual kernel-level system hardening, SELinux wins that race.
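
For the curious, the same static eth0 setup looks roughly like this in each world (addresses made up, obviously):

Code:
# Ubuntu: /etc/network/interfaces
auto eth0
iface eth0 inet static
	address 192.168.1.10
	netmask 255.255.255.0
	gateway 192.168.1.1

# CentOS: /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1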

Who knows, maybe I'm crazy but I'm finding myself tinkering with Linux more and more every day and each day I:

A) Learn something new
B) Love to find out new ways (or at least 2 ways of doing things)
C) Hate Microsoft and Windows more and more each day
 

zoku88

Member
Well, I've officially (unofficially, because if it was official I would get yelled at) started the case study of moving from Ubuntu to CentOS for web serving. The fact that cPanel isn't on Debian has pissed me off enough (as well as my web content providers) that we're starting to move that way.

Ironically, I remember using RHL in the past and HATING the RPMs... oddly enough, they are way the hell better than .debs for what you do.

Only thing that is wonky is where stuff exists. Best example is configuring eth0 for my VM.

Ubuntu: /etc/network/interfaces

CentOS: /etc/sysconfig/network-scripts/ifcfg-eth0

Logically I look at both and I think... "Hmmphh, yeah, yeah, I see the logic... but why the gap?" (Meaning: why is it so different?) In the reading I've been doing, RHL is a school of thought vastly different from that of Debian. Thankfully, since I know vi and how to navigate a CLI, I can fumble my way through with some web searches and such. As to the "why move?": well, as previously mentioned, cPanel for administration, plus security and rolling updates, and to be honest, SELinux is a pretty big step up from bubblegummed security solutions. AppArmor is fine for day-to-day stuff, but for actual kernel-level system hardening, SELinux wins that race.

Who knows, maybe I'm crazy but I'm finding myself tinkering with Linux more and more every day and each day I:

A) Learn something new
B) Love to find out new ways (or at least 2 ways of doing things)
C) Hate Microsoft and Windows more and more each day

I've never used a RedHat based distro.

Didn't know CentOS was a rolling distro.

Out of curiosity, what do you mean by:
In the reading I've been doing, RHL is a school of thought vastly different from that of Debian.
 

Massa

Member
Network configuration is a mess on Linux because it's something that comes directly from the kernel guys, so we used to have a new network stack every other year and each distro dealt with it in a different way. If Linus Torvalds was as proficient about usability as he pretends to be when talking about desktops we'd have a simple and standardized setup as in other operating systems like FreeBSD.
 
Network configuration is a mess on Linux because it's something that comes directly from the kernel guys, so we used to have a new network stack every other year and each distro dealt with it in a different way. If Linus Torvalds was as proficient about usability as he pretends to be when talking about desktops we'd have a simple and standardized setup as in other operating systems like FreeBSD.

Bizarrely enough, I never had any trouble with network configuration… until NetworkManager was introduced.

For that matter, PackageKit can go eat a ferret. These relatively recent "camel case" service programs have been nothing but a huge headache for me.
 

Pctx

Banned
I've never used a RedHat based distro.

Didn't know CentOS was a rolling distro.

Out of curiosity, what do you mean by:

Ever since version 5.x it is. Red Hat Linux is a different school of thought from Debian in that it puts security front and center as the paramount purpose of an operating system. Debian, while it addresses security (and doesn't do it poorly, mind you), makes some assumptions about the sysadmin setting it up, for ease of use. RHL (and in this case CentOS [pronounced Cent-OS]) assumes none of that and is Fort Knox out of the box.

I installed Apache2, MySQL and PHP and tried getting to my server. Nothing. I can ping it! (YAY!) However, no web page would load. I did some quick Google searches and sure enough, iptables was blocking the traffic. Allowed it through and BOOM, there it'd show up.
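
If anyone else hits that same wall, the fix on a stock CentOS box is roughly this (adjust the port for whatever you're actually serving):

Code:
# Allow inbound HTTP, then save the rule so it survives a reboot
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
service iptables save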

Ironically, my "usability" testing is basically to configure and harden CentOS as I did Ubuntu and document it along the way. The learning curve for me is nowhere near as steep as it was 18 months ago as I know a lot more of the directory structure and help commands.

In comparison, CentOS releases a major version (in this case, 6.x) and supports it for 10 years. CentOS 7 I believe is due out in 2014, with support then sunsetting in 2024, which is kinda insane when you think about it, but stability is key. For those who are curious, I'm basically seeing if the following works or is different on CentOS versus Ubuntu:

Apache2 (duh!)
MySQL (duh!)
PHP (duh)
NGINX
OpenSSH (duh)
OpenSSL (duh)
Shorewall Firewall
OSSEC
Cpanel
PHPMyAdmin
and then pretty much the differences in system paths between the two. Another example: Apache2 in RHL is called httpd (after the HTTP daemon). As such, I'm used to typing

Code:
sudo service apache2 graceful
In CentOS I simply type:
Code:
service httpd graceful
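
Same deal for turning the service on at boot, in case anyone follows along at home (standard commands on each side, nothing exotic):

Code:
# CentOS
chkconfig httpd on
# Ubuntu
sudo update-rc.d apache2 defaults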

Anyways, I'm a nerd and I like learning the differences so if you're looking for a challenge, start reading up on web servers and then how to secure them. You'll get to know Linux really, really quick.
 

zoku88

Member
snippety snip

So basically, secure by default and you have to enable unsecure things in order to get stuff working. Rather than having things unsecure and having everything work automatically and you having to go back and close 'open' things?

I actually have a friend who is playing around with a VPS (linode, using Ubuntu, I believe.) He's been having some frustrations with it (with Ubuntu's package management systems.)

Maybe I should advise him to try out CentOS (I think that was one of the options, as well as Gentoo and Arch, which he rejected when I suggested those.)
 

Pctx

Banned
So basically, secure by default and you have to enable unsecure things in order to get stuff working. Rather than having things unsecure and having everything work automatically and you having to go back and close 'open' things?

I actually have a friend who is playing around with a VPS (linode, using Ubuntu, I believe.) He's been having some frustrations with it (with Ubuntu's package management systems.)

Maybe I should advise him to try out CentOS (I think that was one of the options, as well as Gentoo and Arch, which he rejected when I suggested those.)

Correct. This is one of the "pillars" if you will of CentOS and Red Hat that stands out from Debian in terms of security.

Apt-get (or aptitude, like I use) is interesting. I've used yum, YaST and pacman, and it's funny because of all of them, yum wins hands down in flexibility. I use aptitude in Ubuntu because it auto-discovers any dependencies (which is what yum does by default), but in terms of package management it can't touch yum in its options. Also, .debs vs. RPMs are an interesting thing. In a GUI environment, you browse the web, download the RPM and literally double-click the thing and it installs your app. Much like in Ubuntu when it opens it in the Software Center and installs it. In a CLI environment, it's as simple as:

Code:
rpm -i yourpackagehere.rpm
That's it. I think I was dumbfounded by this, as I remember when I worked at Intel with Red Hat back in the early 2000s and I wanted to kill myself when installing RPMs, because in those days it was about "building" the RPM package. Now, thankfully, that hard work is mostly all done. Lazy admins? Naw... we're just better than that. :)
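
And for the dependency side of it that I mentioned above, the yum equivalents would be something like this (package names are just examples):

Code:
# Pull a package plus all of its dependencies from the repos
yum install httpd

# Install a downloaded RPM but still resolve its dependencies from the repos
yum localinstall yourpackagehere.rpm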

To be fair, jumping ship to another distro that uses another package manager is a tough thing for some and, for others (more like myself...), less of a big deal. I think that's what's so great about Linux. If I had to look at *nix in terms of drugs...

Ubuntu (Gateway drug) -> Ubu Variants -> Mint -> Fedora -> CentOS -> Arch -> Make your own.

What I find about Ubuntu is that it's comfortable and it's my first fling. Flirting with other distros, though, makes you appreciate the flexibility of Linux a lot more.
 