Thursday, June 19, 2014

Display HBA and WWN in HP-UX


First, display the available HBAs installed in your system:

# ioscan -kfnC fc
Class I  H/W Path     Driver S/W State   H/W Type     Description
====================================================================
fc    0  0/3/0/0/0/0  fclp   CLAIMED     INTERFACE    HP AD355-60001
                         /dev/fclp0
fc    1  0/3/0/0/0/1  fclp   CLAIMED     INTERFACE    HP AD355-60001
                         /dev/fclp1
fc    2  0/7/0/0/0/0  fclp   CLAIMED     INTERFACE    HP AD355-60001
                         /dev/fclp2
fc    3  0/7/0/0/0/1  fclp   CLAIMED     INTERFACE    HP AD355-60001
                         /dev/fclp3

The ioscan command above shows the devices fclp0 to fclp3. Then run fcmsutil to get the WWNs for each adapter:

# /opt/fcms/bin/fcmsutil /dev/fclp0

Vendor ID is = 0xXXXX
Device ID is = 0xXXXX
PCI Sub-system Vendor ID is = 0xXXXX
PCI Sub-system ID is = 0xXXXX
Chip version = 2
Firmware Version = 2.70X5 SLI-3 (Z3F2.70X5)
EFI Version = ZE3.21A3
EFI Boot = ENABLED
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
Topology = PTTOPT_FABRIC
Link Speed = 4Gb
Local N_Port_id is = 0xXXXXXX
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x0000000000000000
N_Port Port World Wide Name = 0x0000000000000001
Switch Port World Wide Name = 0x1000000000000000
Switch Node World Wide Name = 0x1000000000000001
Driver state = ONLINE
Hardware Path is = 0/3/0/0/0/0
Maximum Frame Size = 2048
Driver Version = @(#) FCLP: PCIe Fibre Channel driver (FibrChanl-02), B.11.31.0909, Jun  5 2009, FCLP_IFC (3,2)
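If you want the WWNs of all adapters in one go, a small loop along these lines works (a sketch; it assumes the /dev/fclp* device files reported by ioscan above and the default fcmsutil path):

for dev in $(ioscan -kfnC fc | awk '/\/dev\/fclp/ {print $1}'); do
    echo "== $dev =="
    # print only the World Wide Name lines for this adapter
    /opt/fcms/bin/fcmsutil $dev | grep "World Wide Name"
done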

Saturday, June 14, 2014

Load-Generating Tools

One important thing to keep in mind when load-testing is that there are only so many socket connections you can have in Linux. This is a kernel limitation, known as the Ephemeral Ports Issue. You can extend the range (to some extent) in /etc/sysctl.conf; but basically, a Linux machine can only have about 64,000 sockets open at once. So when load testing, we have to make the most of those sockets by making as many requests as possible over a single connection. In addition to that, we'll need more than one machine to do the load generation. Otherwise, the load generators will run out of available sockets and fail to generate enough load.
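If you do need to widen the ephemeral port range, the relevant sysctl is shown below (the values are a common choice, not a universal recommendation; adjust them for your environment):

net.ipv4.ip_local_port_range = 1024 65535

Add that line to /etc/sysctl.conf and apply it with sysctl -p.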

Apache Bench

I started with 'ab', Apache Bench. This is the simplest general-use http benchmarking tool that I know of. And it ships with Apache, so it's probably already on your system. Unfortunately, I could only get about 900 requests/sec using this. I've seen other people get up to 2,000 with it, but I could tell right away that 'ab' wasn't the tool for this job.
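For reference, an invocation along these lines is typical (the URL and the request/concurrency counts here are illustrative, not the exact values I used); the -k flag enables HTTP keep-alive so that many requests reuse a single connection:

ab -k -n 100000 -c 100 http://192.168.122.10/test.txt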

Httperf

Next, I tried 'httperf'. This tool is more powerful, but still relatively simple and limited in its capabilities. Figuring out how many req/sec you'll be generating is not as straightforward as just passing it a number. It took me several tries to get more than a couple hundred req/sec. For example:

This creates 100,000 sessions, at a rate of 1,000 per second. Each session makes 5 calls, which are spread out by 2 seconds.

httperf --hog --server=192.168.122.10 --wsess=100000,5,2 --rate 1000 --timeout 5
Total: connections 117557 requests 219121 replies 116697 test-duration 111.423 s

Connection rate: 1055.0 conn/s (0.9 ms/conn, <=1022 concurrent connections)
Connection time [ms]: min 0.3 avg 865.9 max 7912.5 median 459.5 stddev 993.1
Connection time [ms]: connect 31.1
Connection length [replies/conn]: 1.000

Request rate: 1966.6 req/s (0.5 ms/req)
Request size [B]: 91.0

Reply rate [replies/s]: min 59.4 avg 1060.3 max 1639.7 stddev 475.2 (22 samples)
Reply time [ms]: response 56.3 transfer 0.0
Reply size [B]: header 267.0 content 18.0 footer 0.0 (total 285.0)
Reply status: 1xx=0 2xx=116697 3xx=0 4xx=0 5xx=0

CPU time [s]: user 9.68 system 101.72 (user 8.7% system 91.3% total 100.0%)
Net I/O: 467.5 KB/s (3.8*10^6 bps)
Eventually, I was able to get 6,622 connections/sec with these settings:

httperf --hog --server 192.168.122.10 --num-conn 100000 --rate 20000 --timeout 5
(A total of 100,000 connections created, and the connections are created at a fixed rate of 20,000 per second.)
It has potential, and a few more features than 'ab', but it's not quite the heavy-lifter that I need for this project. I need something that supports multiple load-testing nodes in a distributed fashion. Hence, my next attempt: Tsung.

Installing Tsung in CentOS 6.2

The first thing you'll need is the EPEL repository (for Erlang), so set that up before continuing. Once that's done, install the required packages on each of the nodes that you'll be using to generate load. If you don't already have passwordless SSH keys set up between the nodes, do that too.
yum -y install erlang perl perl-RRD-Simple.noarch perl-Log-Log4perl-RRDs.noarch gnuplot perl-Template-Toolkit firefox
Download the latest Tsung from Github, or from their website.
wget http://tsung.erlang-projects.org/dist/tsung-1.4.2.tar.gz
Untar and compile.
tar zxfv  tsung-1.4.2.tar.gz
cd tsung-1.4.2
./configure && make && make install
Copy the example config into ~/.tsung. This is the location of the Tsung config files, and log files.
cp  /usr/share/doc/tsung/examples/http_simple.xml /root/.tsung/tsung.xml
You can edit this file to your specifications, or use the one that works for me. This is my config that, after much trial and error, now generates 5 million http requests per second, when used with 7 distributed nodes.
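Here is a minimal sketch of such a tsung.xml, following the structure described below; the host names, CPU counts, user limits and the target IP are placeholders you will need to adjust, and the exact numbers that reach 5 million requests per second will differ per environment:

<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice" version="1.0">
  <clients>
    <!-- one <client> entry per load-generating node (loadnode1 ... loadnode7) -->
    <client host="loadnode1" weight="1" maxusers="40000" cpu="8"/>
  </clients>
  <servers>
    <!-- the cluster IP or an individual web server to test -->
    <server host="192.168.122.10" port="80" type="tcp"/>
  </servers>
  <load>
    <!-- phase 1: for 10 minutes, 8 new users arrive per second (15,000 in total) -->
    <arrivalphase phase="1" duration="10" unit="minute">
      <users maxnumber="15000" arrivalrate="8" unit="second"/>
    </arrivalphase>
    <!-- two more arrivalphases with higher rates would follow here -->
  </load>
  <sessions>
    <!-- every simulated user runs this session (probability 100) -->
    <session name="http-test" probability="100" type="ts_http">
      <!-- request /test.txt 10,000,000 times over the same user connection -->
      <for from="1" to="10000000" incr="1" var="counter">
        <request><http url="/test.txt" method="GET" version="1.1"/></request>
      </for>
    </session>
  </sessions>
</tsung>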
It's a lot to take in at first, but it's really quite simple once you understand it. 
  • <client> is simply the host(s) to run Tsung on. You can specify IPs and the maximum number of CPUs that you want Tsung to use. You can also set a limit on the number of users that the node will simulate with maxusers. Each of these users will perform an operation that we will define later.
  • <servers> is the name(s) of the http server(s) you want to test. We will be using this option to test the cluster IP, as well as individual servers.
  • <arrivalphase> defines when our simulated users will "arrive" at our website, and how quickly they will arrive.
    • In phase 1, which lasts 10 minutes, 15,000 users will arrive, at a rate of 8 per second.

    • There are two more arrivalphases, in which users arrive in a similar fashion.
    • Altogether, these arrivalphases make up a <load>, which controls how many requests per second we'll be generating.
  • <session> defines what those users will be doing once they've arrived at your website.
  • probability allows you to define random things that users might do. Sometimes they may click this, other times they may click that. Probabilities must add up to 100%.
  • In the configuration above, the users only ever do one thing, so it has a probability of 100%.
  • <request> is what the users do, 100% of the time. They loop 10,000,000 times and request a single web page, /test.txt.
  • This looping construct allows us to use less user-connections to achieve a very high number of requests per second.
Once you've got that in place, you can create this handy alias to quickly view your Tsung reports.
vim ~/.bashrc
alias treport="/usr/lib/tsung/bin/tsung_stats.pl; firefox report.html"
source ~/.bashrc
Then start up Tsung.
[root@loadnode1 ~] tsung start
Starting Tsung
"Log directory is: /root/.tsung/log/20120421-1004"
And view the report when finished.
cd /root/.tsung/log/20120421-1004
treport

Using Tsung to Plan Your Cluster Build

Now that we have a powerful enough load-testing tool, we can plan the rest of the cluster build:
  1. Use Tsung to test a single http server. Get a base benchmark.
  2. Tune the heck out of those web servers, testing with Tsung regularly to see improvements.
  3. Tune the TCP sockets of those systems to obtain optimal network performance. Again, test, test, test.
  4. Build the LVS cluster, which contains those fully-tuned web servers.
  5. Stress-test LVS by using Tsung on the cluster IP.
In the next two articles, I'll show you how to get your web server performing at top speed, and how to bring it all together with the LVS cluster software.

Configuring A High Availability Cluster (Heartbeat) On CentOS


Assign the hostname node01 to the primary node, with IP address 172.16.4.80 on eth0.
Assign the hostname node02 to the slave node, with IP address 172.16.4.81 on eth0.



Note: on node01
uname -n
must return node01.
On node02
uname -n
must return node02.
172.16.4.82 is the virtual IP address that will be used for our Apache webserver (i.e., Apache will listen on that address).
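Both node names should resolve on both machines; if you are not using DNS for this, a pair of /etc/hosts entries like the following (added on both nodes) is enough:

172.16.4.80   node01
172.16.4.81   node02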

Configuration

1. Download and install the heartbeat package. In our case we are using CentOS so we will install heartbeat with yum:
yum install heartbeat
or download these packages:
heartbeat-2.08
heartbeat-pils-2.08
heartbeat-stonith-2.08
2. Now we have to configure heartbeat on our two-node cluster. We will deal with three files. These are:
authkeys
ha.cf
haresources
3. Before we start configuring, there is one more thing to do: copy these example files to the /etc/ha.d directory. In our case we copy them as given below:
cp /usr/share/doc/heartbeat-2.1.2/authkeys /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/ha.cf /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/haresources /etc/ha.d/
4. Now let's start configuring heartbeat. First we will deal with the authkeys file; we will use authentication method 2 (sha1). For this, make the following changes in the authkeys file:
vi /etc/ha.d/authkeys
Then add the following lines:
auth 2
2 sha1 test-ha
Change the permission of the authkeys file:
chmod 600 /etc/ha.d/authkeys
5. Moving on to our second file (ha.cf), which is the most important. Edit the ha.cf file with vi:
vi /etc/ha.d/ha.cf
Add the following lines in the ha.cf file:
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
initdead 120
bcast eth0
udpport 694
auto_failback on
node node01
node node02
Note: node01 and node02 are the hostnames returned by uname -n on each node.
6. The final piece of work in our configuration is to edit the haresources file. This file contains the information about the resources that we want to make highly available. In our case we want the webserver (httpd) highly available:
vi /etc/ha.d/haresources
Add the following line:
node01 172.16.4.82 httpd
7. Copy the /etc/ha.d/ directory from node01 to node02:
scp -r /etc/ha.d/ root@node02:/etc/
8. As we want httpd highly available, let's start configuring httpd:
vi /etc/httpd/conf/httpd.conf
Add this line in httpd.conf:
Listen 172.16.4.82:80
9. Copy the /etc/httpd/conf/httpd.conf file to node02:
scp /etc/httpd/conf/httpd.conf root@node02:/etc/httpd/conf/
10. Create the file index.html on both nodes (node01 & node02):
On node01:
echo "node01 apache test server" > /var/www/html/index.html
On node02:
echo "node02 apache test server" > /var/www/html/index.html
11. Now start heartbeat on the primary node01 and slave node02:
/etc/init.d/heartbeat start
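To verify that heartbeat has actually brought up the virtual IP on the active node, you can check the interface (a quick sanity check; on heartbeat 2.x the address is usually added to eth0 or as an eth0:0 alias):

ip addr show eth0 | grep 172.16.4.82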
12. Open a web browser and type in the URL:
http://172.16.4.82
It will show node01 apache test server.
13. Now stop the heartbeat daemon on node01:
/etc/init.d/heartbeat stop
In your browser type in the URL http://172.16.4.82 and press enter.
It will show node02 apache test server.
14. You don't need to create a virtual network interface and assign the IP address (172.16.4.82) to it manually. Heartbeat will do this for you, and start the service (httpd) itself, so don't worry about this.
Don't use the IP addresses 172.16.4.80 and 172.16.4.81 for services. These addresses are used by heartbeat for communication between node01 and node02. If either of them is used for services/resources, it will interfere with heartbeat and the cluster will not work.

Tuesday, November 26, 2013

Netcat – a couple of useful examples


One of the Linux command line tools I had initially under-estimated is netcat, or just nc. By default, netcat creates a TCP socket either in listening mode (server socket) or a socket that is used in order to connect to a server (client mode). Actually, netcat does not care whether the socket is meant to be a server or a client. All it does is take data from stdin and transfer it to the other end across the network.



The simplest example of its usage is to create a server-client chat system. Although this is a very primitive way to chat, it shows how netcat works. In the following examples it is assumed that the machine that creates the listening socket (server) has the IP address 192.168.0.1. So, create the chat server on this machine and set it to listen on TCP port 3333:
$ nc -l 3333
On the other end, connect to the server with the following:
$ nc 192.168.0.1 3333
In this case, the keyboard acts as the stdin. Anything you type in the server machine’s terminal is transferred to the client machine and vice-versa.

Transferring Files

In the very same way it can be used to transfer files between two computers. You can create a server that serves the file with the following:
$ cat backup.iso | nc -l 3333
Receive backup.iso on the client machine with the following:
$ nc 192.168.0.1 3333 > backup.iso
As you may have noticed, netcat does not show any info about the progress of the data transfer. This is inconvenient when dealing with large files. In such cases, a pipe-monitoring utility like pv can be used to show a progress indicator. For example, the following shows the total amount of data that has been transferred in real-time on the server side:
$ cat backup.iso | pv -b | nc -l 3333
Of course, the same can be implemented on the client side by piping netcat’s output through pv:
$ nc 192.168.0.1 3333 | pv -b > backup.iso

Other Examples

Netcat is extremely useful for creating a partition image and sending it to a remote machine on-the-fly:
$ dd if=/dev/hdb5 | gzip -9 | nc -l 3333
On the remote machine, connect to the server and receive the partition image with the following command:
$ nc 192.168.0.1 3333 | pv -b > myhdb5partition.img.gz
This might not be as classy as the partition backups using partimage, but it is efficient.
Another useful thing is to compress the critical files on the server machine with tar and have them pulled by a remote machine:
$ tar -czf - /etc/ | nc -l 3333
As you can see, there is a dash in the tar options instead of a filename. This is because tar’s output needs to be passed to netcat.
On the remote machine, the backup is pulled in the same way as before:
$ nc 192.168.0.1 3333 | pv -b > mybackup.tar.gz
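If you would rather unpack the files immediately instead of keeping a tarball, you can feed netcat’s output straight into tar on the receiving side (a variation of the above; extraction happens in the current directory):

$ nc 192.168.0.1 3333 | tar -xzvf -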

Security

It is obvious that, using netcat in the way described above, the data travels in the clear across the network. This is acceptable on a local network, but for transfers across the internet it is wise to do it through an SSH tunnel.
Using an SSH tunnel has two advantages:
  1. The data is transfered inside an encrypted tunnel, so it is well-protected.
  2. You do not need to keep any open ports in the firewall configuration of the machine that will act as the server, as the connections will take place through SSH.
You pipe the file to a listening socket on the server machine in the same way as before. It is assumed that an SSH server runs on this machine too.
$ cat backup.iso | nc -l 3333
On the client machine connect to the listening socket through an SSH tunnel:
$ ssh -f -L 23333:127.0.0.1:3333 me@192.168.0.1 sleep 10; \
        nc 127.0.0.1 23333 | pv -b > backup.iso
This way of creating and using the SSH tunnel has the advantage that the tunnel is automagically closed after file transfer finishes. For more information and explanation about it please read my article about auto-closing SSH tunnels.

Telnet-like Usage

Netcat can be used in order to talk to servers like telnet does. For example, in order to get the definition of the word “server” from the “WordNet” database at the dict.org dictionary server, I’d do:
$ nc dict.org 2628
220 ..............some WELCOME.....
DEFINE wn server
150 1 definitions retrieved
151 "server" wn "WordNet (r) 2.0"
server
     n 1: a person whose occupation is to serve at table (as in a
          restaurant) [syn: {waiter}]
     2: (court games) the player who serves to start a point
     3: (computer science) a computer that provides client stations
        with access to files and printers as shared resources to a
        computer network [syn: {host}]
     4: utensil used in serving food or drink
.
250 ok [d/m/c = 1/0/18; 0.000r 0.000u 0.000s]
QUIT
221 bye [d/m/c = 0/0/0; 16.000r 0.000u 0.000s]

Works as a Port Scanner too

A useful command line flag is -z. When it is used, netcat does not send any data to the server; it just reports which ports it found open. Also, instead of a single port, it can accept a port range to scan. For example:
$ nc -z 192.168.0.1 80-90
Connection to 192.168.0.1 80 port [tcp/http] succeeded!
In this example, netcat scanned the 80-90 range of ports and reported that port 80 is open on the remote machine.
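Depending on the netcat flavour, adding -v makes it print a line for every port it probes, including the refused ones, which gives a more complete picture (a small variation; the port range here is just an example):

$ nc -zv 192.168.0.1 20-25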
The man page contains some more interesting examples, so take the time to read it.

Notes

All the above examples have been performed on Fedora 5/6. Netcat syntax may vary slightly among Linux distributions, so read the man page carefully.
Netcat provides a primitive way to transfer data between two networked computers. I wouldn’t say it’s an absolutely necessary tool in everyday use, but there are times when this primitive functionality is very useful.

Sunday, November 24, 2013

Mathematica and the Wolfram Language Run on the Raspberry Pi

Wolfram Research, the company behind the knowledge engine Wolfram Alpha and the Mathematica software, has announced a collaboration with the Raspberry Pi Foundation. From now on, Mathematica and the still-unfinished Wolfram Language programming language are included free of charge in the Raspbian operating system.
Mathematica is one of the most widely used mathematical and scientific software packages and first came to market in 1988. Until now, the software was not available for less than roughly 150 euros for a student license.

3D plots in Mathematica on the Raspberry Pi.
The recently announced Wolfram Language makes its debut on the Raspberry Pi. Wolfram Research bills it as a programming language "for the next generation", one that makes even complex tasks such as image and speech processing easy to accomplish. No additional libraries are needed; all functions are meant to be part of the language core.
Raspbian users with at least 600 MB of free space on their SD card can simply install the software:
sudo apt-get update && sudo apt-get install wolfram-engine
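Once installed, the command-line Wolfram kernel can be started from a terminal (the notebook front end is launched with the mathematica command instead):

wolfram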
Company founder and physicist Stephen Wolfram has high hopes for the collaboration. After all, Mathematica was already shipped free with a computer once before, back in 1988: at the time CERN bought Steve Jobs' NeXT computers bundled with the math software, and it was on one of those machines that Tim Berners-Lee invented the WWW. (phs)

Thursday, November 8, 2012

Windows: Delete Files Older Than X Days via Script / Batch


Variant 1: forfiles

No, I hadn't heard of forfiles before either, but it seems to have been included with Windows since W2003 / XP:
Forfiles /P E:\Ordner\ /S /M *.* /D -8 /C "cmd /c del /q @path"

/P E:\Ordner               : path where the search should start
/S                         : include all subfolders
/M *.*                     : search mask - here all files (except those without a file extension)
/D -8                      : last-modified date more than 8 days before today
/C "cmd /c del /q @path"   : command to execute for each matching file (here: delete)


Variant 2: robocopy

With robocopy we cheat a little: we move all the older files into a new folder, which we then delete.
mkdir E:\TEMP
robocopy.exe E:\Ordner E:\TEMP /E /MOVE /MINAGE:8 /R:1 /W:1
rmdir E:\TEMP /s /q

mkdir E:\TEMP              : create the directory E:\TEMP

E:\Ordner                  : source folder
E:\TEMP                    : destination folder
/E                         : include subdirectories
/MOVE                      : move instead of copy
/MINAGE:8                  : minimum age; files younger than 8 days are ignored
/R:1                       : retry once on error
/W:1                       : wait 1 second between retries (on error)

rmdir E:\TEMP /s /q        : delete the directory E:\TEMP including subdirectories without prompting
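If you first want a dry run of the move itself, robocopy's /L switch lists what would be transferred without actually moving anything:

robocopy.exe E:\Ordner E:\TEMP /E /MOVE /MINAGE:8 /R:1 /W:1 /L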

Wednesday, November 7, 2012

Admin Password


The /active:yes approach doesn't work in safe mode either:

Code:
C:\Users\peter> net user Administrator /active:yes
Systemfehler 5 aufgetreten.

Zugriff verweigert

In safe mode I can also only log on as a non-privileged user.

I finally managed it by using the Linux tool chntpw to turn the non-privileged account into an admin account:

Code:
root@sula:~# chntpw -l /mnt/Windows/System32/config/SAM
chntpw version 0.99.6 080526 (sixtyfour), (c) Petter N Hagen
Hive name (from header): <\SystemRoot\System32\Config\SAM>
ROOT KEY at offset: 0x001020 * Subkey indexing type is: 666c
Page at 0x10000 is not 'hbin', assuming file contains garbage at end
File size 262144 [40000] bytes, containing 7 pages (+ 1 headerpage)
Used for data: 262/54048 blocks/bytes, unused: 16/7168 blocks/bytes.


* SAM policy limits:
Failed logins before lockout is: 0
Minimum password length     : 0
Password history count      : 0
| RID -|---------- Username ------------| Admin? |- Lock? --|
| 03e8 | admin                       | ADMIN  | dis/lock |
| 01f4 | Administrator               | ADMIN  | dis/lock |
| 01f5 | Gast                        |     | dis/lock |
| 03e9 | peter                       |     |       |
Changed using the chntpw command:

Code:
root@sula:~# chntpw -u peter /mnt/Windows/System32/config/SAM
chntpw version 0.99.6 080526 (sixtyfour), (c) Petter N Hagen
Hive name (from header): <\SystemRoot\System32\Config\SAM>
ROOT KEY at offset: 0x001020 * Subkey indexing type is: 666c
Page at 0x10000 is not 'hbin', assuming file contains garbage at end
File size 262144 [40000] bytes, containing 7 pages (+ 1 headerpage)
Used for data: 262/54048 blocks/bytes, unused: 16/7168 blocks/bytes.


* SAM policy limits:
Failed logins before lockout is: 0
Minimum password length     : 0
Password history count      : 0
| RID -|---------- Username ------------| Admin? |- Lock? --|
| 03e8 | admin                       | ADMIN  | dis/lock |
| 01f4 | Administrator               | ADMIN  | dis/lock |
| 01f5 | Gast                        |     | dis/lock |
| 03e9 | peter                       |     |       |

---------------------> SYSKEY CHECK <-----------------------
SYSTEM   SecureBoot         : -1 -> Not Set (not installed, good!)
SAM   Account\F          : 0 -> off
SECURITY PolSecretEncryptionKey: -1 -> Not Set (OK if this is NT4)
Syskey not installed!

RID  : 1001 [03e9]
Username: peter
fullname: peter
comment : 
homedir : 

User is member of 1 groups:
00000221 = Benutzer (which has 3 members)

Account bits: 0x0010 =
[ ] Disabled     | [ ] Homedir req. | [ ] Passwd not req. | 
[ ] Temp. duplicate | [X] Normal account  | [ ] NMS account  | 
[ ] Domain trust ac | [ ] Wks trust act.  | [ ] Srv trust act   | 
[ ] Pwd don't expir | [ ] Auto lockout | [ ] (unknown 0x08)  | 
[ ] (unknown 0x10)  | [ ] (unknown 0x20)  | [ ] (unknown 0x40)  | 

Failed login count: 0, while max tries is: 0
Total  login count: 11

- - - - User Edit Menu:
 1 - Clear (blank) user password
 2 - Edit (set new) user password (careful with this on XP or Vista)
 3 - Promote user (make user an administrator)
(4 - Unlock and enable user account) [seems unlocked already]
 q - Quit editing user, back to user select
Select: [q] > 3
NOTE: This function is still experimental, and in some cases it
   may result in stangeness when editing user/group in windows.
   Also, users (like Guest often is) may still be prevented
   from login via security/group policies which is not changed.
Do you still want to promote the user? (y/n) [n] y
User is member of 1 groups.
User was member of groups: 00000221 =Users, 
Deleting user memberships
Adding into only administrators:
Promotion DONE!

Hives that have changed:
 #  Name
 0  
Write hive files? (y/n) [n] : y
 0  
- OK
The check looks good:

Code:
root@sula:~# chntpw -l /mnt/Windows/System32/config/SAM
chntpw version 0.99.6 080526 (sixtyfour), (c) Petter N Hagen
Hive name (from header): <\SystemRoot\System32\Config\SAM>
ROOT KEY at offset: 0x001020 * Subkey indexing type is: 666c
Page at 0x10000 is not 'hbin', assuming file contains garbage at end
File size 262144 [40000] bytes, containing 7 pages (+ 1 headerpage)
Used for data: 263/54064 blocks/bytes, unused: 18/7152 blocks/bytes.


* SAM policy limits:
Failed logins before lockout is: 0
Minimum password length     : 0
Password history count      : 0
| RID -|---------- Username ------------| Admin? |- Lock? --|
| 03e8 | admin                       | ADMIN  | dis/lock |
| 01f4 | Administrator               | ADMIN  | dis/lock |
| 01f5 | Gast                        |     | dis/lock |
| 03e9 | peter                       | ADMIN  |       |
root@sula:~#
After booting I could log on normally as the non-privileged user, and a cmd.exe could then be started as Administrator without an additional password prompt. From there I could set the password with "net user admin xyz".