3 posts tagged 'Authentication'

  1. 2013.08.27 Using cURL to automate HTTP jobs
  2. 2012.08.27 Released New Tool – Router Password Kracker
  3. 2011.03.19 Building an ssh authentication server with RADIUS
2013.08.27 18:46

Using cURL to automate HTTP jobs

Date:    Jan 19, 2011
 
                The Art Of Scripting HTTP Requests Using Curl
                =============================================
 
 This document will assume that you're familiar with HTML and general
 networking.
 
 The ability to write scripts is essential to making a good computer
 system. Unix's capability to be extended by shell scripts and various tools
 that run automated commands and scripts is one reason why it has succeeded
 so well.
 
 The increasing number of applications moving to the web has made "HTTP
 Scripting" more frequently requested and wanted. Being able to automatically
 extract information from the web, to impersonate users, and to post or
 upload data to web servers are all important tasks today.
 
 Curl is a command line tool for doing all sorts of URL manipulations and
 transfers, but this particular document will focus on how to use it when
 doing HTTP requests for fun and profit. I'll assume that you know how to
 invoke 'curl --help' or 'curl --manual' to get basic information about it.
 
 Curl is not written to do everything for you. It makes the requests, it gets
 the data, it sends data and it retrieves the information. You probably need
 to glue everything together using some kind of scripting language or
 repeated manual invocations.
 
1. The HTTP Protocol
 
 HTTP is the protocol used to fetch data from web servers. It is a very simple
 protocol that is built upon TCP/IP. The protocol also allows information to
 get sent to the server from the client using a few different methods, as will
 be shown here.
 
 HTTP is plain ASCII text lines sent by the client to a server to request a
 particular action; the server then replies with a few text lines before the
 actual requested content is sent to the client.
 
 The client, curl, sends an HTTP request. The request contains a method (like
 GET, POST, HEAD etc), a number of request headers and sometimes a request
 body. The HTTP server responds with a status line (indicating whether things
 went well), response headers and most often also a response body. The "body"
 part is the plain data you requested, such as the actual HTML or the image.
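
 An illustrative sketch of such an exchange (the exact headers vary by curl
 version and server); first the client's request, then the server's reply:

```
GET / HTTP/1.1
Host: www.example.com
Accept: */*

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1270

<html>
...
```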
 
 1.1 See the Protocol
 
  Using curl's option --verbose (-v as a short option) will display what kind
  of commands curl sends to the server, as well as a few other informational
  texts.
 
   --verbose is the single most useful option when it comes to debugging or
   even understanding the curl<->server interaction.
 
  Sometimes even --verbose is not enough. Then --trace and --trace-ascii offer
  even more details as they show EVERYTHING curl sends and receives. Use it
  like this:
 
      curl --trace-ascii debugdump.txt http://www.example.com/
 
2. URL
 
 The Uniform Resource Locator format is how you specify the address of a
 particular resource on the Internet. You know these, you've seen URLs like
 http://curl.haxx.se or https://yourbank.com a million times.
 
3. GET a page
 
 The simplest and most common request/operation made using HTTP is to get a
 URL. The URL could itself refer to a web page, an image or a file. The client
 issues a GET request to the server and receives the document it asked for.
 If you issue the command line
 
        curl http://curl.haxx.se
 
 you get a web page returned in your terminal window. The entire HTML document
 that that URL holds.
 
 All HTTP replies contain a set of response headers that are normally hidden;
 use curl's --include (-i) option to display them along with the rest of the
 document. You can also ask the remote server for ONLY the headers by using
 the --head (-I) option (which makes curl issue a HEAD request).
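
 For example (an illustrative pair of invocations; the output naturally
 differs from site to site):

```
curl --include http://www.example.com    # response headers, then the body
curl --head http://www.example.com       # HEAD request: headers only
```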
 
4. Forms
 
 Forms are the general way a web site presents an HTML page with fields for
 the user to enter data in, together with some kind of 'OK' or 'submit'
 button that sends that data to the server. The server then typically uses
 the posted data to decide how to act: using the entered words to search in a
 database, adding the info to a bug tracking system, displaying the entered
 address on a map, or using the info as a login prompt to verify that the
 user is allowed to see what it is about to see.
 
 Of course there has to be some kind of program on the server end to receive
 the data you send. You cannot just invent something out of thin air.
 
 4.1 GET
 
  A GET-form uses the method GET, as specified in HTML like:
 
        <form method="GET" action="junk.cgi">
          <input type=text name="birthyear">
          <input type=submit name=press value="OK">
        </form>
 
  In your favorite browser, this form will appear with a text box to fill in
  and a press-button labeled "OK". If you fill in '1905' and press the OK
  button, your browser will then create a new URL to get for you. The URL will
  get "junk.cgi?birthyear=1905&press=OK" appended to the path part of the
  previous URL.
 
  If the original form was seen on the page "www.hotmail.com/when/birth.html",
  the second page you'll get will become
  "www.hotmail.com/when/junk.cgi?birthyear=1905&press=OK".
 
  Most search engines work this way.
 
  To make curl do the GET form post for you, just enter the expected created
  URL:
 
        curl "http://www.hotmail.com/when/junk.cgi?birthyear=1905&press=OK"
 
 4.2 POST
 
   The GET method makes all input field names appear in the URL field of
   your browser. That's generally a good thing when you want to be able to
   bookmark the page with your given data, but it is an obvious disadvantage
   if you entered secret information in one of the fields, or if a large
   number of fields create a very long and unreadable URL.
 
   The HTTP protocol therefore offers the POST method. With POST the client
   sends the data separated from the URL, so you won't see any of it in the
   URL address field.
 
  The form would look very similar to the previous one:
 
        <form method="POST" action="junk.cgi">
          <input type=text name="birthyear">
          <input type=submit name=press value=" OK ">
        </form>
 
  And to use curl to post this form with the same data filled in as before, we
  could do it like:
 
        curl --data "birthyear=1905&press=%20OK%20" http://www.example.com/when.cgi
 
  This kind of POST will use the Content-Type
  application/x-www-form-urlencoded and is the most widely used POST kind.
 
  The data you send to the server MUST already be properly encoded, curl will
  not do that for you. For example, if you want the data to contain a space,
  you need to replace that space with %20 etc. Failing to comply with this
  will most likely cause your data to be received wrongly and messed up.
 
  Recent curl versions can in fact url-encode POST data for you, like this:
 
        curl --data-urlencode "name=I am Daniel" http://www.example.com
 
 4.3 File Upload POST
 
   Back in late 1995 an additional way to post data over HTTP was defined. It
   is documented in RFC 1867, which is why this method is sometimes referred
   to as RFC1867-posting.
 
  This method is mainly designed to better support file uploads. A form that
  allows a user to upload a file could be written like this in HTML:
 
    <form method="POST" enctype='multipart/form-data' action="upload.cgi">
      <input type=file name=upload>
      <input type=submit name=press value="OK">
    </form>
 
  This clearly shows that the Content-Type about to be sent is
  multipart/form-data.
 
  To post to a form like this with curl, you enter a command line like:
 
        curl --form upload=@localfilename --form press=OK [URL]
 
 4.4 Hidden Fields
 
   A very common way for HTML-based applications to pass state information
   between pages is to add hidden fields to the forms. Hidden fields are
   already filled in, they aren't displayed to the user, and they get passed
   along just like all the other fields.
 
  A similar example form with one visible field, one hidden field and one
  submit button could look like:
 
    <form method="POST" action="foobar.cgi">
      <input type=text name="birthyear">
      <input type=hidden name="person" value="daniel">
      <input type=submit name="press" value="OK">
    </form>
 
   To post this with curl, you won't have to think about whether the fields
   are hidden or not. To curl they're all the same:
 
        curl --data "birthyear=1905&press=OK&person=daniel" [URL]
 
 4.5 Figure Out What A POST Looks Like
 
   When you're about to fill in a form and send it to a server using curl
   instead of a browser, you're of course very interested in sending a POST
   exactly the way your browser does.
 
   An easy way to see this is to save the HTML page with the form on your
   local disk, modify the 'method' to GET, and press the submit button (you
   could also change the action URL if you want to).
 
   You will then clearly see the data get appended to the URL, separated with
   a '?' character, just as GET forms are supposed to do.
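
   The same trick can be scripted. The sketch below fabricates a local copy
   of the earlier birthyear form; flipping its method from POST to GET is a
   plain text substitution:

```shell
# Save a local copy of the form (here we fabricate the earlier example form).
cat > form.html <<'EOF'
<form method="POST" action="junk.cgi">
  <input type=text name="birthyear">
  <input type=submit name=press value="OK">
</form>
EOF

# Flip POST to GET in a copy; loading form_get.html in a browser and
# pressing OK would now show the encoded fields right in the URL.
sed 's/method="POST"/method="GET"/' form.html > form_get.html
grep 'form method' form_get.html
```

   The grep prints the rewritten form tag, confirming the method changed.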
 
5. PUT
 
 Perhaps the best way to upload data to an HTTP server is to use PUT. Then
 again, this of course requires that someone put a program or script on the
 server end that knows how to receive an HTTP PUT stream.
 
 Put a file to an HTTP server with curl:
 
        curl --upload-file uploadfile http://www.example.com/receive.cgi
 
6. HTTP Authentication
 
 HTTP Authentication is the ability to tell the server your username and
 password so that it can verify that you're allowed to make the request
 you're making. The Basic authentication used in HTTP (the type curl uses by
 default) is *plain* *text* based, which means it sends the username and
 password only slightly obfuscated, still fully readable by anyone sniffing
 the network between you and the remote server.
 
 To tell curl to use a user and password for authentication:
 
        curl --user name:password http://www.example.com
 
 The site might require a different authentication method (check the headers
 returned by the server), and then --ntlm, --digest, --negotiate or even
 --anyauth might be options that suit you.
 
 Sometimes your HTTP access is only available through the use of an HTTP
 proxy. This seems to be especially common at various companies. An HTTP
 proxy may require its own user and password to allow the client to get
 through to the Internet. To specify those with curl, run something like:
 
        curl --proxy-user proxyuser:proxypassword curl.haxx.se
 
 If your proxy requires the authentication to be done using the NTLM method,
 use --proxy-ntlm, if it requires Digest use --proxy-digest.
 
 If you use any of these user+password options but leave out the password
 part, curl will prompt for the password interactively.
 
 Do note that when a program is run, its parameters may be visible to anyone
 listing the running processes on the system. Thus, other users may be able
 to see your passwords if you pass them as plain command line options. There
 are ways to circumvent this.
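
 One such way (the file name here is just an example) is curl's -K/--config
 option: credentials placed in a private config file never appear in the
 process list:

```shell
# Write the credentials to a file in curl's config-file "option = value"
# syntax, readable only by the owner.
printf 'user = "name:password"\n' > curlrc.example
chmod 600 curlrc.example
cat curlrc.example

# curl would then be invoked without the password on the command line:
#   curl --config curlrc.example http://www.example.com
```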
 
 It is worth noting that while this is how HTTP Authentication works, very
 many web sites will not use this concept when they provide logins etc. See
 the Web Login chapter further below for more details on that.
 
7. Referer
 
 An HTTP request may include a 'referer' field (yes, it is misspelled), which
 can be used to tell from which URL the client got to this particular
 resource. Some programs/scripts check the referer field of requests to
 verify that the request didn't arrive from an external site or an unknown
 page. While this is a poor way to check something so easily forged, many
 scripts still do it. Using curl, you can put anything you want in the
 referer field and thus more easily fool the server into serving your
 request.
 
 Use curl to set the referer field with:
 
        curl --referer http://www.example.com http://www.example.com
 
8. User Agent
 
 Very similar to the referer field, all HTTP requests may set the User-Agent
 field. It names the user agent (client) being used. Many applications use
 this information to decide how to display pages. Silly web programmers try
 to make different pages for users of different browsers to make them look
 their best in each particular browser, and they often serve different kinds
 of javascript, vbscript etc. as well.
 
 At times, you will see that getting a page with curl will not return the same
 page that you see when getting the page with your browser. Then you know it
 is time to set the User Agent field to fool the server into thinking you're
 one of those browsers.
 
 To make curl look like Internet Explorer 5 on a Windows 2000 box:
 
  curl --user-agent "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)" [URL]
 
 Or why not look like you're using Netscape 4.73 on an old Linux box:
 
  curl --user-agent "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" [URL]
 
9. Redirects
 
 When a resource is requested from a server, the reply from the server may
 include a hint about where the browser should go next to find this page, or a
 new page keeping newly generated output. The header that tells the browser
 to redirect is Location:.
 
 Curl does not follow Location: headers by default, but simply displays such
 pages in the same manner it displays all HTTP replies. It does, however,
 feature an option that will make it attempt to follow Location: pointers.
 
 To tell curl to follow a Location:
 
        curl --location http://www.example.com
 
 If you use curl to POST to a site that immediately redirects you to another
 page, you can safely use --location (-L) and --data/--form together. Curl will
 only use POST in the first request, and then revert to GET in the following
 operations.
 
10. Cookies
 
 The way the web browsers do "client side state control" is by using
 cookies. Cookies are just names with associated contents. The cookies are
 sent to the client by the server. The server tells the client for what path
 and host name it wants the cookie sent back, and it also sends an expiration
 date and a few more properties.
 
 When a client communicates with a server using a name and path matching
 those previously specified in a received cookie, the client sends the
 cookies and their contents back to the server, unless of course they have
 expired.
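
 On the wire this is just two plain headers. A sketch of the round trip
 (cookie name and value invented for the example):

```
HTTP/1.1 200 OK
Set-Cookie: sessionid=abc123; path=/

GET /next/page HTTP/1.1
Host: www.example.com
Cookie: sessionid=abc123
```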
 
 Many applications and servers use this method to connect a series of
 requests into a single logical session. To be able to use curl on such
 occasions, we must be able to record and send back cookies the way the web
 application expects them, the same way browsers deal with them.
 
 The simplest way to send a few cookies to the server when getting a page with
 curl is to add them on the command line like:
 
        curl --cookie "name=Daniel" http://www.example.com
 
 Cookies are sent as common HTTP headers. This is practical as it allows curl
 to record cookies simply by recording headers. Record cookies with curl by
 using the --dump-header (-D) option like:
 
        curl --dump-header headers_and_cookies http://www.example.com
 
 (Take note that the --cookie-jar option described below is a better way to
 store cookies.)
 
 Curl has a full blown cookie parsing engine built in that comes into use if
 you want to reconnect to a server and use cookies that were stored from a
 previous connection (or handcrafted manually to fool the server into
 believing you had a previous connection). To use previously stored cookies,
 you run curl like:
 
        curl --cookie stored_cookies_in_file http://www.example.com
 
 Curl's "cookie engine" gets enabled when you use the --cookie option. If you
 only want curl to understand received cookies, use --cookie with a file that
 doesn't exist. Example, if you want to let curl understand cookies from a
 page and follow a location (and thus possibly send back cookies it received),
 you can invoke it like:
 
        curl --cookie nada --location http://www.example.com
 
 Curl has the ability to read and write cookie files that use the same file
 format that Netscape and Mozilla do. It is a convenient way to share cookies
 between browsers and automatic scripts. The --cookie (-b) switch
 automatically detects if a given file is such a cookie file and parses it,
 and by using the --cookie-jar (-c) option you'll make curl write a new cookie
 file at the end of an operation:
 
        curl --cookie cookies.txt --cookie-jar newcookies.txt http://www.example.com
 
11. HTTPS
 
 There are a few ways to do secure HTTP transfers. By far the most common
 protocol for doing this is what is generally known as HTTPS, HTTP over
 SSL. SSL encrypts all the data that is sent and received over the network
 and thus makes it harder for attackers to spy on sensitive information.
 
 SSL (or TLS, as the latest version of the standard is called) offers a
 truckload of advanced features for the encryption and key infrastructure
 mechanisms that encrypted HTTP requires.
 
 Curl supports encrypted fetches thanks to the freely available OpenSSL
 libraries. To get a page from an HTTPS server, simply run curl like:
 
        curl https://secure.example.com
 
 11.1 Certificates
 
   In the HTTPS world, you use certificates, in addition to normal passwords,
   to validate that you are who you claim to be. Curl supports client-side
   certificates. All certificates are locked with a pass phrase, which you
   need to enter before the certificate can be used by curl. The pass phrase
   can be specified on the command line or, if not, entered interactively
   when curl asks for it. Use a certificate with curl against an HTTPS server
   like:
 
        curl --cert mycert.pem https://secure.example.com
 
   curl also tries to verify that the server is who it claims to be, by
   verifying the server's certificate against a locally stored CA cert
   bundle. If the verification fails, curl refuses the connection. Use
   --insecure (-k) if you want to tell curl to ignore that the server can't
   be verified.
 
   More about server certificate verification and CA cert bundles can be read
   in the SSLCERTS document, available online here:
 
        http://curl.haxx.se/docs/sslcerts.html
 
12. Custom Request Elements
 
 When doing fancy stuff, you may need to add or change elements of a single
 curl request.
 
 For example, you can change the POST request to a PROPFIND and send the data
 as "Content-Type: text/xml" (instead of the default Content-Type) like this:
 
         curl --data "<xml>" --header "Content-Type: text/xml" --request PROPFIND url.com
 
 You can delete a default header by providing one without content. For
 example, you can ruin the request by chopping off the Host: header:
 
        curl --header "Host:" http://www.example.com
 
 You can add headers the same way. Your server may want a "Destination:"
 header, and you can add it:
 
        curl --header "Destination: http://nowhere" http://example.com
 
13. Web Login
 
 While not strictly just HTTP related, it still causes a lot of people
 problems, so here's the executive run-down of how the vast majority of all
 login forms work and how to log in to them using curl.
 
 It can also be noted that to do this properly in an automated fashion, you
 will almost certainly need to script things and make multiple curl
 invocations.
 
 First, servers mostly use cookies to track the logged-in status of the
 client, so you will need to capture the cookies you receive in the
 responses. Then, many sites also set a special cookie on the login page (to
 make sure you got there through their login page) so you should make a habit
 of first getting the login-form page to capture the cookies set there.
 
 Some web-based login systems feature various amounts of javascript, and
 sometimes such code is used to set or modify cookie contents, possibly to
 prevent programmed logins like the ones this manual describes. Anyway, if
 reading the code isn't enough to let you repeat the behavior manually,
 capturing the HTTP requests done by your browser and analyzing the sent
 cookies is usually a working method to figure out how to shortcut the
 javascript.
 
 In the actual <form> tag for the login, lots of sites fill in random,
 session-generated or otherwise secret hidden fields, and you may need to
 first fetch the HTML code for the login form and extract all the hidden
 fields to be able to do a proper login POST. Remember that the contents need
 to be URL encoded when sent in a normal POST.
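
 A scripted login following these steps typically looks like the sketch
 below. The URLs and field names are made up for the example, and the
 hidden-field extraction is demonstrated against a local stand-in for the
 saved login page:

```shell
# Step 1 (against a real site): GET the login page, capturing cookies and HTML:
#   curl --cookie-jar cookies.txt --output login.html https://example.com/login
# Here we fabricate login.html so the extraction step can be demonstrated.
cat > login.html <<'EOF'
<form method="POST" action="login.cgi">
  <input type=hidden name="token" value="a1b2c3">
  <input type=text name="user">
</form>
EOF

# Step 2: pull the hidden field(s) out as name=value pairs.
hidden=$(sed -n 's/.*type=hidden name="\([^"]*\)" value="\([^"]*\)".*/\1=\2/p' login.html)
echo "$hidden"

# Step 3 (against a real site): POST credentials plus the hidden fields,
# sending the captured cookies back and saving any new ones:
#   curl --cookie cookies.txt --cookie-jar cookies.txt \
#        --data "user=me&pass=secret&$hidden" https://example.com/login.cgi
```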
 
14. Debug
 
 Many times when you run curl on a site, you'll notice that the site doesn't
 seem to respond the same way to your curl requests as it does to your
 browser's.
 
 Then you need to start making your curl requests more similar to your
 browser's requests:
 
 * Use the --trace-ascii option to store fully detailed logs of the requests
   for easier analyzing and better understanding
 
 * Make sure you check for and use cookies when needed (both reading with
   --cookie and writing with --cookie-jar)
 
 * Set user-agent to one like a recent popular browser does
 
 * Set referer like it is set by the browser
 
 * If you use POST, make sure you send all the fields and in the same order as
   the browser does it. (See chapter 4.5 above)
 
 A very good helper for making sure you do this right is the LiveHTTPHeaders
 tool, which lets you view all headers you send and receive with
 Mozilla/Firefox (even when using HTTPS).
 
 A more raw approach is to capture the HTTP traffic on the network with tools
 such as ethereal or tcpdump and check which headers were sent and received
 by the browser. (HTTPS makes this technique ineffective.)
 
15. References
 
 RFC 2616 is a must to read if you want in-depth understanding of the HTTP
 protocol.
 
 RFC 3986 explains the URL syntax.
 
 RFC 2109 defines how cookies are supposed to work.
 
 RFC 1867 defines the HTTP post upload format.
 
 http://curl.haxx.se is the home of the cURL project


Source: curl.haxx.se



2012.08.27 19:50

Released New Tool – Router Password Kracker

Here comes our 90th free tool: Router Password Kracker. It is a free tool to quickly recover a lost password from a router, modem, or website protected with HTTP BASIC authentication.


It comes with a simple GUI, making it easy to use for everyone from layman to expert. Penetration testers and forensic investigators may also find this tool very handy for cracking router/modem/website passwords.

It uses a simple dictionary-based password recovery technique. For complex passwords, you can use tools like Crunch or Cupp to generate brute-force or custom password list files and then use them with 'Router Password Kracker'.



Source: nagareshwar.securityxploded.com



2011.03.19 10:44

Building an ssh authentication server with RADIUS

In a Microsoft Windows environment, a technology called Active Directory centralizes user management across an enterprise IT environment. How, then, can centralized user management be implemented in a Unix environment? Going further, is there a way to centralize user management in an environment that mixes Unix and MS Windows? Several solutions exist for this, including NIS/NIS+, LDAP and Kerberos. This post introduces how to implement a Linux authentication server using RADIUS. For reference, lists of Linux distributions on which freeRadius is confirmed to work are available online.

1. What is RADIUS?
RADIUS stands for Remote Authentication Dial In User Service, a protocol for centralized Authentication, Authorization and Accounting (AAA) management. RADIUS is an IETF standard, is in widespread use, and can be integrated into a wide variety of applications. Because many applications such as VPNs, firewalls, wireless APs and RDBMSs support RADIUS authentication, user authentication can be centralized: an IT administrator managing many different devices and services can manage them all with a single account and password.

The fact that ssh, which encrypts its communication, is safer than telnet, which exchanges plain text, will no longer be news to most IT administrators. ssh authenticates users through PAM. In fact, since modern Unix systems use PAM for user authentication, changing the authentication mechanism does not require changing it service by service across the system.

Linux generally manages its user database based on /etc/passwd. The authentication model used in this post likewise takes the id from the /etc/passwd stored on each client and authenticates by comparing the password the user enters against the user password stored on the RADIUS server. As a result, the uid, gid and home directory can differ from client to client; to overcome this you would have to use LDAP-based authentication instead.

In addition, the pam_radius_auth library, the RADIUS authentication module, must be installed on each client and configured with the authentication server's details.

2. Authenticating ssh users with FreeRadius
2.1 Installing FreeRadius
FreeRadius runs on several platforms besides Linux, is distributed under the GNU GPL v2 license, and works together with the BSD-licensed PAM library. This post looks at how to use FreeRadius for ssh user authentication. The network environment used in this post is shown below.

First, prepare the Linux server that will act as the RADIUS server. This post uses Ubuntu 10.04 Lucid Lynx. Let's install FreeRadius, the RADIUS server:

 #apt-get install freeradius

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  freeradius-common freeradius-utils libdbi-perl libfreeradius2 libltdl7
  libnet-daemon-perl libperl5.10 libplrpc-perl perl perl-base perl-modules
Suggested packages:
  freeradius-ldap freeradius-postgresql freeradius-mysql freeradius-krb5
  dbishell perl-doc libterm-readline-gnu-perl libterm-readline-perl-perl
The following NEW packages will be installed:
  freeradius freeradius-common freeradius-utils libdbi-perl libfreeradius2
  libltdl7 libnet-daemon-perl libperl5.10 libplrpc-perl
The following packages will be upgraded:
  perl perl-base perl-modules
3 upgraded, 9 newly installed, 0 to remove and 34 not upgraded.
Need to get 11.0MB of archives.
After this operation, 8,290kB of additional disk space will be used.
Do you want to continue [Y/n]?

(output omitted)

Updating default SSL certificate settings, if any...
Adding user freerad to group ssl-cert
Generating DH parameters, 1024 bit long safe prime, generator 2
This is going to take a long time
...............................+.......................+.......+ (long run of progress output trimmed) ......++*++*++*
 * Starting FreeRADIUS daemon freeradius                                 [ OK ]

Setting up freeradius-utils (2.1.8+dfsg-1ubuntu1) ...
Setting up perl-modules (5.10.1-8ubuntu2) ...
Setting up perl (5.10.1-8ubuntu2) ...

Setting up libnet-daemon-perl (0.43-1) ...
Setting up libplrpc-perl (0.2020-2) ...
Setting up libdbi-perl (1.609-1build1) ...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
root@UAT:/home/iprize#
 
 
 Once the dependency check is done and the required packages have been downloaded and installed, the configuration must be adjusted to match your network environment.

2.2 Editing the FreeRadius configuration
FreeRadius version 2 is said to have been made easy to use, but it is not that easy for first-time users, because documentation and related material are in short supply.

On the FreeRadius server, all that needs changing is to specify the network range and the shared secret for the clients that will request RADIUS authentication. Open clients.conf in the /etc/freeradius/ directory and add the following settings:

client 192.168.111.0/24 {

        secret          = radius_auth
        shortname       = private-network
}
 

Remember the shared secret above. Note that authentication will fail if the server's and the client's shared secrets do not match.
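
Before moving on to the client side, the server can be sanity-checked locally with radtest, a utility from the freeradius-utils package installed above. A sketch using the secret defined above and the example account and password that appear later in this post:

```
radtest iprize testing123 192.168.111.111 0 radius_auth
```

A matching Access-Accept reply means the server side is working.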

3. Client configuration
3.1 Installing the pam-radius-auth library
Linux performs user authentication through PAM (Pluggable Authentication Modules). After a default Ubuntu installation, user authentication uses the /etc/passwd and /etc/shadow databases. Once RADIUS is configured, however, only the user ID is taken from /etc/passwd; the user's password is verified against the authentication server before login. For this, the PAM configuration must be changed so that authentication goes through RADIUS. The required library is libpam-radius-auth; install it on every client that will use RADIUS authentication.

 # apt-get install libpam-radius-auth

 
Edit the /etc/pam_radius_auth.conf file to tell PAM about the RADIUS server. The shared secret entered here must be identical to the one specified in the clients.conf configuration file.

 # server[:port] shared_secret      timeout (s)

192.168.111.111 radius_auth       1
 

3.2 Changing the ssh configuration
As mentioned above, ssh authenticates through PAM. Add the setting below to /etc/pam.d/sshd so that user authentication goes through RADIUS. It must be added above @include common-auth: if added below, the common-auth settings are applied first and local user information is used instead.

 auth sufficient pam_radius_auth.so
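
The top of /etc/pam.d/sshd then ends up in this order, with the RADIUS line strictly above the include:

```
auth sufficient pam_radius_auth.so
@include common-auth
```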

 

3.3 Adding users
Add the account to be used on the client machine. There is no need to set a password at this point.

#adduser --home /home/iprize iprize

 
 
4. Troubleshooting
 The -X option runs FreeRadius in debug mode so that, when something goes wrong, you can see where. If the freeradius server is already running, stop it and start it again with the -X option. When the RADIUS server is started with the service command there is no way to pass -X; the freeradius executable lives under the /usr/sbin/ directory, so change to that directory and run it with -X. The messages below are what appears when authentication succeeds and a normal login takes place.
 

  rad_recv: Access-Request packet from host 192.168.111.131 port 6767, id=147, length=91

        User-Name = "iprize"
        User-Password = "testing123"
        NAS-IP-Address = 127.0.1.1
        NAS-Identifier = "sshd"
        NAS-Port = 5742
        NAS-Port-Type = Virtual
        Service-Type = Authenticate-Only
        Calling-Station-Id = "192.168.111.1"
+- entering group authorize {...}
++[preprocess] returns ok
++[chap] returns noop
++[mschap] returns noop
[suffix] No '@' in User-Name = "iprize", looking up realm NULL
[suffix] No such realm "NULL"
++[suffix] returns noop
[eap] No EAP-Message, not doing EAP
++[eap] returns noop
++[unix] returns updated
++[files] returns noop
++[expiration] returns noop
++[logintime] returns noop
++[pap] returns updated
Found Auth-Type = PAP
+- entering group PAP {...}
[pap] login attempt with password "testing123"
[pap] Using CRYPT encryption.
[pap] User authenticated successfully
++[pap] returns ok
+- entering group post-auth {...}
++[exec] returns noop
Sending Access-Accept of id 147 to 192.168.111.131 port 6767
Finished request 0.
Going to the next request
Waking up in 4.9 seconds.
Cleaning up request 0 ID 147 with timestamp +25
Ready to process requests.
 
References
1. freeRADIUS - http://freeradius.org/


Source: iprize.tistory.com
