Elmar Klausmeier's Blog

Installing and Configuring the H2O Web-Server (Sat, 13 Apr 2024 18:45:00 +0200)

1. Task at hand. Install the H2O web-server on Arch Linux. H2O is a web-server written by Kazuho Oku et al. It supports:

  1. HTTP/1 and HTTP/1.1,
  2. HTTP/2,
  3. HTTP/3 ("QUIC"),
  4. FastCGI, therefore PHP-FPM,
  5. Reverse proxy,
  6. Builtin mruby, which, however, crashes.

In benchmarks it consistently ranks at the top. See Web Framework Benchmarks.

It works way faster than NGINX or Apache. It shines for static web content.

2. Building. The already existing AUR packages for H2O do not work, i.e., they generate a binary which crashes. The PKGBUILD below produces a working H2O binary.

pkgname=h2o-master-git
pkgver=1.0
pkgrel=1
arch=('i686' 'x86_64')
pkgdesc="H2O: the optimized HTTP/1.x, HTTP/2, HTTP/3 server"
provides=(h2o)
url="https://h2o.examp1e.net"
source=("git+https://github.com/h2o/h2o.git?commit=master?signed/" h2o.service)
sha256sums=('SKIP' '734e9d045dd5568665762d48e4077208c3da8c68f87510aaa9559d495dd680fd')


build() {
    cd "$srcdir"/h2o
    cmake -DCMAKE_INSTALL_PREFIX=/usr .
    make
}

package() {
    cd "$srcdir"/h2o
    install -Dm 644 LICENSE "$pkgdir"/usr/share/licenses/$pkgname/LICENSE
    install -Dm 644 README.md "$pkgdir"/usr/share/doc/h2o/README.md
    install -Dm 644 "$srcdir"/h2o.service "$pkgdir"/usr/lib/systemd/system/h2o.service
    install -Dm 644 examples/h2o/h2o.conf "$pkgdir/etc/h2o.conf"
    make DESTDIR="$pkgdir" install
}

Compiling on AMD Ryzen 7 5700G, max clock 4.673 GHz, 64 GB RAM, finishes in less than two minutes.

$ time makepkg -f
...
==> Tidying install...
  -> Removing libtool files...
  -> Purging unwanted files...
  -> Removing static library files...
  -> Copying source files needed for debug symbols...
  -> Compressing man and info pages...
==> Checking for packaging issues...
==> Creating package "h2o-master-git"...
  -> Generating .PKGINFO file...
  -> Generating .BUILDINFO file...
  -> Generating .MTREE file...
  -> Compressing package...
==> Leaving fakeroot environment.
==> Finished making: h2o-master-git 1.0-1 (Fri 12 Apr 2024 09:48:36 PM CEST)
        real 92.42s
        user 447.76s
        sys 0
        swapped 0
        total space 0

3. Configuration. Below is a working configuration in file h2o.conf. The configuration accomplishes the following:

  1. it serves HTTP and HTTPS,
  2. it compresses via gzip and brotli,
  3. it is started as user root, then switches to user http,
  4. its log format is similar to the Hiawatha log format,
  5. PHP files are handled by PHP-FPM.

The entire configuration file is a YAML file.

listen: 80
listen: &ssl_listen
  port: 443
  ssl:
    certificate-file:    /etc/letsencrypt/live/eklausmeier.goip.de/fullchain.pem
    key-file:  /etc/letsencrypt/live/eklausmeier.goip.de/privkey.pem
    minimum-version: TLSv1.2
    cipher-preference: server
    cipher-suite: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
    # Oldest compatible clients: Firefox 27, Chrome 30, IE 11 on Windows 7, Edge, Opera 17, Safari 9, Android 5.0, and Java 8
    # see: https://wiki.mozilla.org/Security/Server_Side_TLS

# The following three lines enable HTTP/3
listen:
  <<: *ssl_listen
  type: quic
header.set: "Alt-Svc: h3-25=\":443\""

user: http
#pid-file: /var/run/h2o/h2o.pid
#crash-handler: /usr/local/bin/h2obacktrace
access-log:
  path: /var/log/h2o/access.log
  format: "%h|%{%Y/%m/%d:%T %z}t|%s|%b|%r|%{referer}i|%{user-agent}i|%V:%p|"
error-log: /var/log/h2o/error.log
compress: [ br, gzip ]
#file.dirlisting: ON

file.custom-handler:
  extension: .php
  fastcgi.connect:
    port: /run/php-fpm/php-fpm.sock
    type: unix

hosts:
  0:
    paths:
      /jpilot/favicon.ico:
        file.file: /home/klm/php/saaze-jpilot/public/favicon.ico
      /jpilot/img:
        file.dir: /home/klm/php/saaze-jpilot/public/img
      /jpilot/jpilot.css:
        file.file: /home/klm/php/saaze-jpilot/public/jpilot.css
      /koehntopp/assets:
        file.dir: /home/klm/php/saaze-koehntopp/public/assets
      /koehntopp/jscss:
        file.dir: /home/klm/php/saaze-koehntopp/public/jscss
      /lemire/jscss:
        file.dir: /home/klm/php/saaze-lemire/public/jscss
      /mobility/img:
        file.dir: /home/klm/php/saaze-mobility/public/img
      /nukeklaus/img:
        file.dir: /home/klm/php/saaze-nukeklaus/public/img
      /nukeklaus/jscss:
        file.dir: /home/klm/php/saaze-nukeklaus/public/jscss
      /panorama/img:
        file.dir: /home/klm/php/saaze-panorama/public/img
      /paternoster/paternoster.css:
        file.file: /home/klm/php/saaze-paternoster/public/paternoster.css
      /saaze-example/blogklm.css:
        file.file: /home/klm/php/saaze-example/public/blogklm.css
      /vonhoff/img:
        file.dir: /home/klm/php/saaze-vonhoff/public/img
      /wendt/pagefind:
        file.dir: /home/klm/php/saaze-wendt/public/pagefind
      /:
        file.dir: /srv/http
        redirect:
          status: 301
          internal: YES
          url: /index.php?
      /p:
        mruby.handler: |
          Proc.new do |env|
            [200, {'content-type' => 'text/plain'}, ["Hello world"]]
          end

As already mentioned at the top: mruby doesn't work. Once you access /p, the entire web-server crashes.

H2O does not offer URL rewriting out of the box. The above path configurations operate on a prefix-match scheme, i.e., if the URL in question starts with the string provided, this is considered a match, and the part after the match is appended to the path given in file.dir. For example, a request for /koehntopp/assets/img/x.png would be served from /home/klm/php/saaze-koehntopp/public/assets/img/x.png.
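
To quickly verify that the FastCGI/PHP-FPM handler above is wired up correctly, a trivial test script can be used. This is only a sketch: the file name info.php and its placement under the file.dir root /srv/http are my assumptions.

<?php
// info.php -- hypothetical test file placed under /srv/http.
// If the fastcgi.connect handler above works, requesting
// https://<host>/info.php renders the PHP configuration page.
phpinfo();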

4. Discussion. While alternatives to Apache and NGINX are highly welcome, the current state of H2O leaves many questions unanswered.

  1. The builtin brotli compression is ancient: it is seven years behind the official Google Brotli repository, which contains a number of serious fixes.
  2. The builtin mruby software is two years behind, offering mruby version 3.1 instead of 3.3.
  3. mruby crashes as soon as it is called.
  4. In the hosts part the hostname seems to have no effect.

I tried to replace the old mruby dependency with the current 3.3 version. The build of H2O then failed.

While embedding software packages directly into the H2O GitHub repo makes building the software easier, it risks the included software rotting. That's exactly what is happening here.

Fun fact: I noticed H2O when reading about the LWAN web-server written by L. Pereira. Both Kazuho Oku and L. Pereira work at Fastly.

Also see H2O Tutorial.

In case someone wants to analyze why mruby crashes, here is the result of where in gdb:

Core was generated by `h2o -c h2o.conf'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000062085dc9ae9b in mrb_str_hash (mrb=<optimized out>, str=...) at /usr/src/debug/h2o-master-git/h2o/deps/mruby/src/string.c:1673
1673        hval ^= (uint32_t)*bp++;
[Current thread is 1 (Thread 0x7002156006c0 (LWP 18088))]
(gdb) where
#0  0x000062085dc9ae9b in mrb_str_hash (mrb=<optimized out>, str=...) at /usr/src/debug/h2o-master-git/h2o/deps/mruby/src/string.c:1673
#1  0x000062085dc8cb6c in obj_hash_code (h=0x7001d0028660, key=..., mrb=0x1a0) at /usr/src/debug/h2o-master-git/h2o/deps/mruby/src/hash.c:325
#2  ib_it_init (mrb=mrb@entry=0x7001d00015a0, it=it@entry=0x7002155fe550, h=h@entry=0x7001d0028660, key=...) at /usr/src/debug/h2o-master-git/h2o/deps/mruby/src/hash.c:645
#3  0x000062085dc8cd3a in ib_init (ib_byte_size=<optimized out>, ib_bit=<optimized out>, h=0x7001d0028660, mrb=<optimized out>) at /usr/src/debug/h2o-master-git/h2o/deps/mruby/src/hash.c:151
#4  ht_init (mrb=mrb@entry=0x7001d00015a0, h=h@entry=0x7001d0028660, size=size@entry=17, ea=0x7001d0047700, ea_capa=ea_capa@entry=25, ht=ht@entry=0x0, ib_bit=<optimized out>) at /usr/src/debug/h2o-master-git/h2o/deps/mruby/src/hash.c:793
#5  0x000062085dc8d11a in ar_set (mrb=0x7001d00015a0, h=0x7001d0028660, key=..., val=...) at /usr/src/debug/h2o-master-git/h2o/deps/mruby/src/hash.c:536
#6  0x000062085dc8c2e6 in h_set (val=..., key=..., h=0x7001d0028660, mrb=0x7001d00015a0) at /usr/src/debug/h2o-master-git/h2o/deps/mruby/src/hash.c:169
#7  mrb_hash_set (mrb=0x7001d00015a0, hash=..., key=..., val=...) at /usr/src/debug/h2o-master-git/h2o/deps/mruby/src/hash.c:1245
#8  0x000062085dc67938 in iterate_headers_callback (shared_ctx=shared_ctx@entry=0x7001d0001540, pool=pool@entry=0x7001d0076958, header=header@entry=0x7002155fe8d0, cb_data=cb_data@entry=0x7001d0028660) at /usr/src/debug/h2o-master-git/h2o/lib/handler/mruby.c:748
#9  0x000062085dc67e4c in h2o_mruby_iterate_native_headers (shared_ctx=shared_ctx@entry=0x7001d0001540, pool=<optimized out>, headers=<optimized out>, cb=cb@entry=0x62085dc678a0 <iterate_headers_callback>, cb_data=cb_data@entry=0x7001d0028660)
    at /usr/src/debug/h2o-master-git/h2o/lib/handler/mruby.c:727
#10 0x000062085dc6a76e in build_env (generator=0x7001d006cbe0) at /usr/src/debug/h2o-master-git/h2o/lib/handler/mruby.c:836
#11 on_req (_handler=<optimized out>, req=<optimized out>) at /usr/src/debug/h2o-master-git/h2o/lib/handler/mruby.c:974
#12 0x000062085dbc603a in call_handlers (req=0x7001d00765d8, handler=0x62085f2d5ef0) at /usr/src/debug/h2o-master-git/h2o/lib/core/request.c:165
#13 0x000062085dbeeb89 in handle_incoming_request (conn=<optimized out>) at /usr/src/debug/h2o-master-git/h2o/lib/http1.c:714
#14 0x000062085dba6293 in run_socket (sock=0x7001d009b660) at /usr/src/debug/h2o-master-git/h2o/lib/common/socket/evloop.c.h:834
#15 run_pending (loop=loop@entry=0x7001d0000b70) at /usr/src/debug/h2o-master-git/h2o/lib/common/socket/evloop.c.h:876
#16 0x000062085dba6300 in h2o_evloop_run (loop=0x7001d0000b70, max_wait=<optimized out>) at /usr/src/debug/h2o-master-git/h2o/lib/common/socket/evloop.c.h:925
#17 0x000062085dc5da1b in run_loop (_thread_index=<optimized out>) at /usr/src/debug/h2o-master-git/h2o/src/main.c:4210
#18 0x000070022b8a955a in ?? () from /usr/lib/libc.so.6
#19 0x000070022b926a3c in ?? () from /usr/lib/libc.so.6
Location of core files in Arch Linux (Wed, 10 Apr 2024 22:15:00 +0200)

In the old UNIX days the core file was written where the offending program was started. The only prerequisite was that there was no limit imposed. Limits can be checked by

$ ulimit -a
-t: cpu time (seconds)              unlimited
-f: file size (blocks)              unlimited
-d: data seg size (kbytes)          unlimited
-s: stack size (kbytes)             8192
-c: core file size (blocks)         unlimited
-m: resident set size (kbytes)      unlimited
-u: processes                       254204
-n: file descriptors                1024
-l: locked-in-memory size (kbytes)  8192
-v: address space (kbytes)          unlimited
-x: file locks                      unlimited
-i: pending signals                 254204
-q: bytes in POSIX msg queues       819200
-e: max nice                        0
-r: max rt priority                 0
-N 15: rt cpu time (microseconds)   unlimited

The line for the "core file size" must be greater than zero.

In Arch Linux that alone doesn't help: core files are collected by systemd-coredump under /var/lib/systemd/coredump and can be inspected with coredumpctl:

$ coredumpctl info
          PID: 16354 (h2o)
           UID: 33 (http)
           GID: 33 (http)
        Signal: 11 (SEGV)
     Timestamp: Wed 2024-04-10 20:02:12 CEST (2h 3min ago)
  Command Line: h2o
    Executable: /usr/bin/h2o
 Control Group: /user.slice/user-1000.slice/user@1000.service/tmux-spawn-3fc3de1b-6e2d-43bf-ad3d-bf55b4ce3a1a.scope
          Unit: user@1000.service
     User Unit: tmux-spawn-3fc3de1b-6e2d-43bf-ad3d-bf55b4ce3a1a.scope
         Slice: user-1000.slice
     Owner UID: 1000 (klm)
       Boot ID: 8b9d5dcffc3a4669b0c7fa244db334be
    Machine ID: 814e9c58b1e34999a682767020267eb0
      Hostname: chieftec
       Storage: /var/lib/systemd/coredump/core.h2o.33.8b9d5dcffc3a4669b0c7fa244db334be.16354.1712772132000000.zst (inaccessible)
       Message: Process 16354 (h2o) of user 33 dumped core.

                Stack trace of thread 16363:
                #0  0x0000777802fe7bb3 n/a (libcrypto.so.53 + 0xd0bb3)
                #1  0x00007778030efd5b SSL_CTX_flush_sessions (libssl.so.56 + 0x24d5b)
                #2  0x00005d994cc02023 cache_cleanup_thread (h2o + 0x12a023)
                #3  0x0000777802c7755a n/a (libc.so.6 + 0x8b55a)
                #4  0x0000777802cf4a3c n/a (libc.so.6 + 0x108a3c)

The command coredumpctl list lists the cores collected so far:

$ coredumpctl list
TIME                           PID UID GID SIG     COREFILE     EXE          SIZE
Sat 2024-04-06 17:55:20 CEST 24746  33  33 SIGSEGV inaccessible /usr/bin/h2o    -
Sat 2024-04-06 18:49:20 CEST 26982  33  33 SIGSEGV inaccessible /usr/bin/h2o    -
Sat 2024-04-06 18:50:04 CEST 27178  33  33 SIGSEGV inaccessible /usr/bin/h2o    -

You can start debugging with coredumpctl debug. That will call gdb.

The location and name of the core file can be changed by modifying /proc/sys/kernel/core_pattern:

$ cat /proc/sys/kernel/core_pattern
|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h

More information is here: Core dump file is not generated, coredumpctl, systemd-coredump.

CSS Naked Day (Sun, 31 Mar 2024 12:45:00 +0200)

9th April is CSS Naked Day, a day on which you do not use CSS on your web-site. In 2024 I participate in this day, i.e., I will deactivate the CSS on this blog.

From the CSS Naked Day website:

The idea behind CSS Naked Day is to promote web standards. Plain and simple. This includes proper use of HTML, semantic markup, a good hierarchy structure, and of course, a good old play on words. In the words of 2006, it’s time to show off your <body> for what it really is.

The importance of CSS is illustrated by a humorous tweet.

1. The 50 hour window

The logic to enable or disable CSS is given by the PHP routine below, from the CSS Naked Day site:

<?php
function is_naked_day($d) {
    $start = date('U', mktime(-14, 0, 0, 04, $d, date('Y')));
    $end = date('U', mktime(36, 0, 0, 04, $d, date('Y')));
    $z = date('Z') * -1;
    $now = time() + $z;
    if ( $now >= $start && $now <= $end ) {
        return true;
    }
    return false;
}
?>

Running this with php -a and unixtimestamp.com for the year 2024 gives the following interval:

  1. Start: 08-Apr-2024 12:00 CET
  2. End: 10-Apr-2024 14:00 CET

The rationale is:

CSS Naked Day lasts for one international day. Technically speaking, it will be April 9 somewhere in the world for 50 hours. This is to ensure that everyone’s website will be publicly nude for the entire world to see at any given time during April 9.
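
Before going into the actual template changes below, here is a minimal sketch of how is_naked_day() from above could gate the CSS emission in a template; the stylesheet link mimics the ones used later in this post:

<?php if (!is_naked_day(9)) { // 9 = April 9 ?>
    <link href=/jscss/prism.css rel=stylesheet>
<?php } ?>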

2. Required changes in templates

For this blog I use the static site generator Simplified Saaze. All templates of this generator are written in PHP. So deactivating CSS is a pretty simple if statement.

I use the following hierarchy of PHP files for my entry-template, i.e., the template for a blog post:

  1. entry.php
     1. top-layout.php
        1. head.php
     2. read_cattag_json.php
     3. Actual content: $entry['content']
     4. bottom-layout.php

The following hierarchy is used for the index-template, i.e., the template for showing a reverse-date sorted list of blog posts:

  1. index.php
     1. top-layout.php
        1. head.php
     2. for-loop over entry-excerpts
     3. bottom-layout.php

3. Changes in <head> section

File head.php does not contain any CSS. File top-layout.php handles the majority of the HTML <head> section, and the beginning of the <body> section.

I use prism.js for syntax highlighting. This in turn uses CSS, which is surrounded by a simple if:

<?php $NO_CSS = getenv('NO_CSS') ? true : false; ?>
<?php if (isset($entry['prismjs']) && ! $NO_CSS) { ?>
    <link href=/jscss/prism.css rel=stylesheet>
<?php } ?>

If I generate all the static HTML files, I use the environment variable NO_CSS. In case of dynamic generation I simply set $NO_CSS explicitly in top-layout.php, i.e., $NO_CSS=true;.

I have a separate CSS file, called blogklm.css, which I also surround with an if:

<?php if (! $NO_CSS) echo "<style>\n" ?>
<?php if (! $NO_CSS) require SAAZE_PATH . "/public/jscss/blogklm.css" ?>
<?php if (! $NO_CSS) echo "</style>\n" ?>

For galleries and Markmap I had a conditional anyway. This needed an additional clause:

<?php if (!isset($pagination) && ! $NO_CSS) {
    if (isset($entry['gallery_css'])) echo $entry['gallery_css'];
    if (isset($entry['markmap_css'])) echo $entry['markmap_css'];
} ?>

I use Pagefind for searching within this blog. Pagefind in turn needs CSS, which is surrounded by an if:

<?php if (! $NO_CSS) { ?>
<link href="/pagefind/pagefind-ui.css" rel="stylesheet">
<script src="/pagefind/pagefind-ui.js"></script>
<script>
    window.addEventListener('DOMContentLoaded', (event) => {
        new PagefindUI({ element: "#search", showSubResults: true });
    });
</script>
<?php } ?>

4. Changes in <body> section

Still in top-layout.php. Finally, I explicitly mention that I stripped all CSS, so visitors are not surprised to find a new layout:

<?php if ($NO_CSS) echo "<h2> &nbsp; &nbsp; &nbsp; &nbsp; <a href=\"https://css-naked-day.github.io\">April 9 is CSS Naked Day!</a></h2>\n"; ?>

5. History and evolution of CSS Naked Day

Below text is copied from CSS Naked Day and the Missing Wikipedia Page:

The event dates back to 2006, when Dustin Diaz, an American web developer, advertised the first CSS Naked Day in order “to promote web standards.”

During the first two years (2006 and 2007), CSS Naked Day was held on April 5, when in 2008, the date was changed to April 9.

Until 2009, the event was organized by Diaz. From 2010 to 2014, Taylor Satula, an American web designer, ...

From the first CSS Naked Day in 2006, which had 763 recorded participants, engagement went up to 2,160 participants in 2008. After another strong participation in 2009 (1,266 recorded participants), fewer people and sites are documented to have taken part.

In recent years (2020–2023), only a fraction of these participants is known, usually including a few dozen individuals and their sites. While there are no reliable ways to measure participation, it seems clear that while CSS Naked Day is still being observed, that is only the case for a small minority of people in the field. ...

In the months following the 2015 edition, and until today, Basmaison and Meiert have kept maintaining the site and promoting the event together.

The usual omnipresent Wikipedia trolls and naysayers blocked this wiki entry.

Is Binary Compiled with Frame Pointer Support? (Mon, 18 Mar 2024 14:00:00 +0100)

How can you detect whether a Linux binary was compiled with

gcc -fomit-frame-pointer

Unfortunately, the ELF file itself does not contain a flag which tells you that. But looking at the assembler code can give you the answer.

First disassemble the code with

objdump -d

Check the disassembly for the below pair of instructions at the beginning of any C function:

push   %rbp
mov    %rsp,%rbp

These are the instructions that set up the frame pointer on 64-bit x86 Linux systems.

Example:

0000000000001380 <zif_md4c_toHtml>:
    1380:       55                      push   %rbp
    1381:       48 89 e5                mov    %rsp,%rbp

A good heuristic is then

objdump -d $binary | grep -c "mov.*%rsp,.*%rbp"

Double check with

objdump -d $binary | grep -C1 "mov.*%rsp,.*%rbp"

This heuristic is not foolproof, as individual C routines can be augmented with

__attribute__((optimize("omit-frame-pointer")))

For the intense debate about making -fno-omit-frame-pointer the default in Fedora, see this comment from L. A. F. Pereira in Python 3.11 performance with frame pointers.

See How can I tell whether a binary is compiled with frame pointers or not on Linux?, which discusses the case for 32 bit x86 Linux systems.

Code with framepointers will always contain both of the two instructions push %ebp and mov %esp, %ebp. ... For those working with x86_64, the registers to look for are the 64-bit equivalents: %rbp and %rsp - the concept is the same though!

The post The Return of the Frame Pointers by Brendan Gregg triggered this task.

As of today, 18-Mar-2024, Arch Linux still does not ship binaries with frame pointer support. For example:

$ objdump -d /bin/zsh | grep -c "mov.*%rsp,.*%rbp"
10

The PHP binary, however, fools the heuristic:

$ objdump -d /bin/php | grep -c "mov.*%rsp,.*%rbp"
173

But looking at the actual disassembly shows something like this:

000000000021aff2 <php_info_print_box_end@@Base>:
  21aff2:       f3 0f 1e fa             endbr64
  21aff6:       48 8d 05 43 9b 1e 01    lea    0x11e9b43(%rip),%rax        # 1404b40 <sapi_module@@Base>

I.e., no frame pointer handling.

Chinese Hackers #2 (Tue, 05 Mar 2024 14:15:00 +0100)

In the year 2020, in the blog post Chinese Hackers, I noticed that China makes the most attempts to hack my Linux machines. These attempts look like this:

$ lastb
a        ssh:notty    209.97.163.130   Tue Mar  5 13:07 - 13:07  (00:00)
sftpuser ssh:notty    93.123.39.2      Tue Mar  5 13:05 - 13:05  (00:00)
sftpuser ssh:notty    93.123.39.2      Tue Mar  5 13:05 - 13:05  (00:00)
hzp      ssh:notty    43.156.241.167   Mon Mar  4 18:19 - 18:19  (00:00)
hzp      ssh:notty    43.156.241.167   Mon Mar  4 18:19 - 18:19  (00:00)
root     ssh:notty    8.219.249.208    Mon Mar  4 18:17 - 18:17  (00:00)
mheydary ssh:notty    118.178.132.93   Mon Mar  4 12:35 - 12:35  (00:00)
mheydary ssh:notty    118.178.132.93   Mon Mar  4 12:34 - 12:34  (00:00)
ftp1user ssh:notty    143.255.140.241  Mon Mar  4 12:34 - 12:34  (00:00)
ftp1user ssh:notty    143.255.140.241  Mon Mar  4 12:34 - 12:34  (00:00)
panisa   ssh:notty    139.224.200.60   Mon Mar  4 11:13 - 11:13  (00:00)
panisa   ssh:notty    139.224.200.60   Mon Mar  4 11:13 - 11:13  (00:00)
sina     ssh:notty    129.226.158.202  Mon Mar  4 10:45 - 10:45  (00:00)
sina     ssh:notty    129.226.158.202  Mon Mar  4 10:44 - 10:44  (00:00)
hadoop   ssh:notty    129.226.152.121  Mon Mar  4 10:43 - 10:43  (00:00)

In 2020 I used fail2ban. Since 2021 I use SSHGuard, which uses way fewer resources. See Analysis And Usage of SSHGuard.

I ran a quick analysis of which country is the most aggressive intruder.

1. Collecting IP addresses. SSHGuard blocks the offending intruders via ipset.

$ ipset list > i1

This collects all IP addresses.

Now I run these IP numbers through geoiplookup:

$ for i in `perl -ne 'print $1."\n" if /^(\d+\.\d+\.\d+\.\d+)\s+/' i1`; do geoiplookup $i >> i3; done

The resulting list looks like this:

$ head i3
GeoIP Country Edition: CN, China
GeoIP Country Edition: HK, Hong Kong
GeoIP Country Edition: US, United States
GeoIP Country Edition: US, United States
GeoIP Country Edition: KR, Korea, Republic of
GeoIP Country Edition: PE, Peru
GeoIP Country Edition: CA, Canada
GeoIP Country Edition: CN, China
GeoIP Country Edition: KR, Korea, Republic of
GeoIP Country Edition: KE, Kenya

2. Sorting according to frequency.

cut -d: -f2 i3 | sort | uniq -c | sort -rn
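
For illustration, the same counting can be done in a few lines of PHP; this is just a sketch equivalent to the pipeline above, assuming the geoiplookup output resides in file i3:

<?php
// count country frequencies, like cut -d: -f2 | sort | uniq -c | sort -rn
$freq = [];
foreach (file('i3') as $line) {
    $country = trim(explode(':', $line, 2)[1] ?? '');
    if ($country !== '') $freq[$country] = ($freq[$country] ?? 0) + 1;
}
arsort($freq);	// sort by frequency, descending
foreach ($freq as $c => $n) printf("%7d  %s\n", $n, $c);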

The top 20 offenders are:

   4228  CN, China
   3175  US, United States
   2142  SG, Singapore
   1596  KR, Korea, Republic of
   1042  DE, Germany
    980  IN, India
    755  HK, Hong Kong
    661  BR, Brazil
    566  RU, Russian Federation
    522  VN, Vietnam
    471  ID, Indonesia
    453  JP, Japan
    403  FR, France
    396  NL, Netherlands
    354  GB, United Kingdom
    313  IR, Iran, Islamic Republic of
    307  CA, Canada
    279  TW, Taiwan
    236  AU, Australia
    173  TH, Thailand

Installing IBM COBOL for Linux on Arch Linux #2 (Sat, 02 Mar 2024 14:15:00 +0100)

I have tried multiple times to install IBM COBOL for Linux on Arch Linux, which is the Linux I use:

  1. Installing IBM COBOL for Linux on Arch Linux in 2021
  2. Testing COBOLworx gcc-cobol #2 in 2023

Initially I succeeded in installing the IBM compiler in 2021. The IBM compiler compared very favorably against the GNU Cobol compiler, see Comparing GnuCOBOL to IBM COBOL. But in 2023 this installation procedure failed. So, no IBM COBOL on Arch Linux.

Richard Nelson from IBM contacted me today and mentioned that IBM COBOL should also run on Arch Linux. So I tried to install the latest version 1.2.0.2 again. Version 1.2 is particularly appealing as it supports 64 bit. IBM COBOL compilers were notorious for lacking 64-bit support, see Memory Limitations with IBM Enterprise COBOL Compiler.

My current Arch Linux setup is as given in the table below.

Type      Version
Linux     6.7.6-arch1-2 #1 SMP PREEMPT_DYNAMIC x86_64 GNU/Linux
gcc       gcc version 13.2.1 20230801 (GCC)
glibc     2.39-1
gcc-libs  13.2.1-5

1. Download. The software package is here: IBM COBOL for Linux on x86. IBM now uses an annoying two-factor authentication procedure; you have to click through all these hoops. This 2FA makes it essentially impossible to write an AUR package which downloads the IBM file within the PKGBUILD.

The file in question is IBM_COBOL_V1.2.0_LINUX_EVAL.x86-64.240110.tar.gz. Its size is 116 MB.

$ tar ztvf IBM_COBOL_V1.2.0_LINUX_EVAL.x86-64.240110.tar.gz
drwxr-sr-x root/root         0 2023-06-06 01:05 images/
drwxr-sr-x root/root         0 2024-01-10 16:16 images/rhel/
-rw-rw-r-- root/root  26210268 2024-01-10 16:16 images/rhel/cobol.rte.1.2.0-1.2.0.2-231215.x86_64.rpm
-rw-rw-r-- root/root   2331592 2024-01-10 16:16 images/rhel/cobol.dbg.1.2.0-1.2.0.2-231215.x86_64.rpm
-rw-rw-r-- root/root   3055224 2024-01-10 16:16 images/rhel/cobol.cmp.license-eval.1.2.0-1.2.0.2-231215.x86_64.rpm
-rw-rw-r-- root/root  11199076 2024-01-10 16:16 images/rhel/cobol.cmp.1.2.0-1.2.0.2-231215.x86_64.rpm
drwxr-sr-x root/root         0 2024-01-10 16:17 images/sles/
-rw-r--r-- root/root  22295780 2024-01-10 16:17 images/sles/cobol.rte.1.2.0-1.2.0.2-231215.x86_64.rpm
-rw-r--r-- root/root   1975984 2024-01-10 16:17 images/sles/cobol.dbg.1.2.0-1.2.0.2-231215.x86_64.rpm
-rw-r--r-- root/root   2999760 2024-01-10 16:17 images/sles/cobol.cmp.license-eval.1.2.0-1.2.0.2-231215.x86_64.rpm
-rw-r--r-- root/root   9095804 2024-01-10 16:17 images/sles/cobol.cmp.1.2.0-1.2.0.2-231215.x86_64.rpm
drwxr-sr-x root/root         0 2024-01-10 16:17 images/ubuntu/
-rw-r--r-- root/root   1957512 2024-01-10 16:17 images/ubuntu/cobol.dbg.1.2.0_1.2.0.2-231215_amd64.deb
-rw-r--r-- root/root   2992220 2024-01-10 16:17 images/ubuntu/cobol.cmp.license-eval.1.2.0_1.2.0.2-231215_amd64.deb
-rw-r--r-- root/root  10125300 2024-01-10 16:17 images/ubuntu/cobol.cmp.1.2.0_1.2.0.2-231215_amd64.deb
-rw-r--r-- root/root  22514248 2024-01-10 16:17 images/ubuntu/cobol.rte.1.2.0_1.2.0.2-231215_amd64.deb
-rwxr-xr-x root/root      6763 2024-01-10 16:32 install
-rw-r--r-- root/root    820691 2023-06-06 01:05 install.pdf
-rwxr-xr-x root/root   2694559 2023-06-06 01:12 LicenseAgreement.pdf
-rwxr-xr-x root/root    285651 2023-06-06 01:12 LicenseInformation.pdf
-rwxr-xr-x root/root     57001 2023-06-06 01:12 notices
-rw-r--r-- root/root    311858 2023-06-06 01:14 quickstart.fr_FR.pdf
-rw-r--r-- root/root    311477 2023-06-06 01:14 quickstart.ja_JP.pdf
-rw-r--r-- root/root    281309 2023-06-06 01:14 quickstart.pdf
-rwxr-xr-x root/root      2932 2023-06-06 01:12 README

2. Unpacking the Ubuntu part. We will extract the Ubuntu files listed above.

$ tar zxf IBM_COBOL_V1.2.0_LINUX_EVAL.x86-64.240110.tar.gz images/ubuntu/

Change to images/ubuntu directory and run the below loop, which first unpacks the deb-files with ar, then unpacks the resulting tar.xz data file with tar Jx:

for i in *.deb; do ar xf $i; tar Jxf data.tar.xz; done

This creates a subdirectory opt with 188 entries.

Move the resulting opt or opt/ibm to the "real" /opt and chown -R root:root all the files.

Installation size is 135 MB.

3. Checking the installation. See whether all libraries are in place.

$ ldd /opt/ibm/cobol/1.2.0/bin/cob2
        linux-vdso.so.1 (0x00007ffebab8a000)
        librt.so.1 => /usr/lib/librt.so.1 (0x000070366a43e000)
        libdl.so.2 => /usr/lib/libdl.so.2 (0x000070366a439000)
        libpthread.so.0 => /usr/lib/libpthread.so.0 (0x000070366a434000)
        libc.so.6 => /usr/lib/libc.so.6 (0x000070366a252000)
        /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x000070366a47a000)

$ ldd /opt/ibm/cobol/1.2.0/bin/cob3
        not a dynamic executable

$ ldd cob3_64
        linux-vdso.so.1 (0x00007ffeddbf9000)
        librt.so.1 => /usr/lib/librt.so.1 (0x00007a35119f5000)
        libdl.so.2 => /usr/lib/libdl.so.2 (0x00007a35119f0000)
        libicuuc_64r.so => /opt/ibm/cobol/1.2.0/usr/bin/./../../../rte/usr/lib/libicuuc_64r.so (0x00007a3510600000)
        libcob2_64r.so => /opt/ibm/cobol/1.2.0/usr/bin/./../../../rte/usr/lib/libcob2_64r.so (0x00007a3510000000)
        libm.so.6 => /usr/lib/libm.so.6 (0x00007a3511904000)
        libc.so.6 => /usr/lib/libc.so.6 (0x00007a3511720000)
        libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007a350fc00000)
        libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007a35116fb000)
        libicudata_64r.so => /opt/ibm/cobol/1.2.0/usr/bin/./../../../rte/usr/lib/libicudata_64r.so (0x00007a350da00000)
        libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007a35116f6000)
        /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007a3511a31000)
        libicui18n_64r.so => /opt/ibm/cobol/1.2.0/usr/bin/./../../../rte/usr/lib/libicui18n_64r.so (0x00007a350d200000)
        libdfp_64r.so => /opt/ibm/cobol/1.2.0/usr/bin/./../../../rte/usr/lib/libdfp_64r.so (0x00007a350ca00000)

For convenience add the bin-directory to the PATH:

$ export PATH=$PATH:/opt/ibm/cobol/1.2.0/bin

Up to this point, running the compiler would report a license problem. The actual compiler is cob2.

Here is an example, once the license is setup correctly:

$ cob2 hello1.cob
IBM COBOL for Linux 1.2.0 compile started
End of compilation 1,  program HELLO1,  no statements flagged.

4. Getting a 60 day trial license. Richard Nelson sent me a new file libxlcmpev_64r.so. With this new library file the compiler works flawlessly.

$ license_check
Evaluation (Trial/Eval/TnB) license
Current date    Sat, 02 Mar 2024 17:54:00 GMT
Activation date Thu, 29 Feb 2024 00:00:01 GMT
Expire date     Mon, 29 Apr 2024 23:59:59 GMT
Days left       58

Thanks Richard!

Also, Richard mentioned the install shell script in the original tar file, see line 18 of the tar listing above. I didn't make use of that! My fault. Once I knew that this libxlcmpev_64r.so was the problematic piece, I looked at the install script:

...
extendTrial="$reldir/cobol/$version/usr/bin/xlcmp xlcbl && rm $reldir/cobol/$version/usr/bin/xlcmp"
eval $extendTrial
...

Generating the license now goes like this, as user root:

/opt/ibm/cobol/1.2.0/usr/bin/xlcmp xlcbl

This generates a new 1.2.0/usr/lib/libxlcmpev_64r.so, which provides a valid 60-day license.

$ license_check
Evaluation (Trial/Eval/TnB) license
Current date    Sat, 02 Mar 2024 18:14:24 GMT
Activation date Sat, 02 Mar 2024 00:00:01 GMT
Expire date     Wed, 01 May 2024 23:59:59 GMT
Days left       60
Parallelizing the Output of Simplified Saaze (Tue, 27 Feb 2024 08:00:00 +0100)

This blog uses Simplified Saaze as its static site generator. Generating all 561 HTML pages takes 0.25 seconds. The environment used is given in the table below.

Type              Value
CPU               AMD Ryzen 7 5700G
RAM               64 GB
OS                Arch Linux 6.7.6-arch1-1 #1 SMP PREEMPT_DYNAMIC
PHP               PHP 8.3.3 (cli)
PHP with JIT      PHP 8.3.3 (cli), Zend Engine v4.3.3 with Zend OPcache v8.3.3
Simplified Saaze  2.0

1. Runtimes in serial mode. In the following we use PHP with no JIT. So far runtimes for this very blog are as below:

$ time php saaze -mortb /tmp/build
Building static site in /tmp/build...
    execute(): filePath=./content/aux.yml, nSIentries=7, totalPages=1, entries_per_page=20
    execute(): filePath=./content/blog.yml, nSIentries=452, totalPages=23, entries_per_page=20
    execute(): filePath=./content/gallery.yml, nSIentries=7, totalPages=1, entries_per_page=20
    execute(): filePath=./content/music.yml, nSIentries=69, totalPages=4, entries_per_page=20
    execute(): filePath=./content/error.yml, nSIentries=0, totalPages=0, entries_per_page=20
Finished creating 5 collections, 4 with index, and 561 entries (0.25 secs / 24.46MB)
#collections=5, parseEntry=0.0103/563-5, md2html=0.0201, MathParser=0.0141/561, renderEntry=0.1573/561, renderCollection=0.0058/33, content=561/0, excerpt=0/0
    real 0.28s
    user 0.16s
    sys 0
    swapped 0
    total space 0

It can be seen that the renderEntry() function uses 0.1573 seconds from overall 0.25 seconds, i.e., more than 60%. These 561 calls will now be parallelized. The rest stays serial.

For the Lemire blog we have:

$ time php saaze -rb /tmp/buildLemire
Building static site in /tmp/buildLemire...
        execute(): filePath=/home/klm/php/saaze-lemire/content/blog.yml, nSIentries=2771, totalPages=139, entries_per_page=20
Finished creating 1 collections, 1 with index, and 4483 entries (1.01 secs / 97.18MB)
#collections=1, parseEntry=0.0702/4483-1, md2html=0.1003, MathParser=0.0594/4483, renderEntry=0.4121/4483, renderCollection=0.0225/140, content=4483/0, excerpt=0/0
        real 1.03s
        user 0.64s
        sys 0
        swapped 0
        total space 0

In this case the output template processing takes 0.4121 seconds out of overall 1.01 seconds, that's 40%. This shows that the Lemire templates are simpler. No wonder: they use neither categories and tags nor many of the other gimmicks I use in this blog. But still, 40% of the runtime is spent on output rendering.

In Performance Comparison Saaze vs. Hugo vs. Zola I wrote:

It would be quite easy to use threads in Saaze, i.e., so-called entries and the chunks of collections could easily be processed in parallel.

It is even easier to parallelize the generation of the output files when the PHP templating is in place. We will see that parallelizing can be done in less than 20 lines of PHP code.

2. Runtimes in serial mode with JIT enabled. Below are the runtimes with JIT and OPcache enabled for PHP.

time php saaze -mortb /tmp/build
Building static site in /tmp/build...
        execute(): filePath=./content/aux.yml, nSIentries=7, totalPages=1, entries_per_page=20
        execute(): filePath=./content/blog.yml, nSIentries=453, totalPages=23, entries_per_page=20
        execute(): filePath=./content/gallery.yml, nSIentries=7, totalPages=1, entries_per_page=20
        execute(): filePath=./content/music.yml, nSIentries=69, totalPages=4, entries_per_page=20
        execute(): filePath=./content/error.yml, nSIentries=0, totalPages=0, entries_per_page=20
Finished creating 5 collections, 4 with index, and 562 entries (0.16 secs / 20.36MB)
#collections=5, parseEntry=0.0104/564-5, md2html=0.0219, MathParser=0.0203/562, renderEntry=0.0521/562, renderCollection=0.0022/33, content=562/0, excerpt=0/0
        real 0.19s
        user 0.11s
        sys 0
        swapped 0
        total space 0

The previously massive renderEntry() part of the runtime shrank from 0.1573 seconds to 0.0521 seconds. I think this is mainly due to the OPcache, which now avoids recompiling and reparsing the PHP output templates.

For the Lemire blog with JIT enabled we have:

time php saaze -rb /tmp/buildLemire
Building static site in /tmp/buildLemire...
        execute(): filePath=/home/klm/php/saaze-lemire/content/blog.yml, nSIentries=2771, totalPages=139, entries_per_page=20
Finished creating 1 collections, 1 with index, and 4483 entries (0.62 secs / 96.24MB)
#collections=1, parseEntry=0.0655/4483-1, md2html=0.0974, MathParser=0.0586/4483, renderEntry=0.0707/4483, renderCollection=0.0110/140, content=4483/0, excerpt=0/0
        real 0.65s
        user 0.40s
        sys 0
        swapped 0
        total space 0

Similar picture to the above: the renderEntry() part dropped from 0.4121 seconds to 0.0707 seconds. That's massive.

3. Unix forks in PHP. As a preliminary introduction to pcntl_fork() in PHP, look at the simple PHP code below.

<?php
    for ($i=1; $i<=4; ++$i) {
        if (($pid = pcntl_fork())) {	// non-zero pid: we are in the parent
            printf("i=%d, pid=%d\n",$i,$pid);
            sleep(1);
            exit(0);	// parent prints its child's pid and exits
        }	// the child (pid=0) continues the loop and forks again
    }

Running this script:

$ php forktst.php
i=1, pid=15082
i=2, pid=15083
i=3, pid=15084
i=4, pid=15085

The fork and join method of parallelization is easy to use, but it has the disadvantage that communicating results from the children to the parent is "difficult". Communicating data from the parent to its children is "easy": everything is copied over.
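
For completeness, a conventional fork and join, where the parent waits for all children via pcntl_waitpid(), would look like the sketch below. This is only an illustration; Simplified Saaze does not collect results from its workers, they write their output files independently and exit.

<?php
$nprocs = 4;
for ($procnr = 0; $procnr < $nprocs; ++$procnr) {
    if (($pid = pcntl_fork()) == 0) {	// zero: we are in the child
        printf("worker %d: pid=%d\n", $procnr, getmypid());
        exit(0);	// child terminates after doing its share of the work
    }
}
$status = 0;
while (pcntl_waitpid(-1, $status) > 0)	// join: reap all children
    ;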

4. Implementation in BuildCommand.php. The command-line version of Simplified Saaze calls buildAllStatic(). This routine iterates through all collections, and for each collection it iterates through all entries.

  1. Function getEntries() reads Markdown files into memory and converts them to HTML by using MD4C, all in memory
  2. Function buildEntry() uses the entry in question and writes the HTML to disk by processing it through our PHP templates.

PHP function buildEntry() is essentially:

private function buildEntry(Collection $collection, Entry $entry, string $dest) : void {
    ...
    file_put_contents($entryDir, $this->templateManager->renderEntry($entry));
}

buildEntry() is now encapsulated within beginParallel() and endParallel(). That's it.

foreach ($collections as $collection) {
    $entries    = $collection->getEntries();	# finally calls getContentAndExcerpt() and sorts
    $nentries   = count($entries);
    $nSIentries = count($collection->entriesSansIndex);
    $entries_per_page = $collection->data['entries_per_page'] ?? \Saaze\Config::$H['global_config_entries_per_page'];
    $totalPages = ceil($nSIentries / $entries_per_page);
    printf("\texecute(): filePath=%s, nSIentries=%d, totalPages=%d, entries_per_page=%d\n",$collection->filePath,$nSIentries,$totalPages,$entries_per_page);

    $this->beginParallel($nentries,$aprocs);
    $i = 0;
    foreach ($entries as $entry) {
        if ($this->nprocs > 0  &&  ($i++ % $this->nprocs) != $this->procnr) continue;	// distribute work among nprocs processes
        if ($entry->data['entry'] ?? true) {
            $this->buildEntry($collection, $entry, $dest);
            $entryCount++;
        }
    }
    $this->endParallel();

    if ($tags) {	// populate cat_and_tag[][] array
        foreach ($entries as $entry) {
            if ($entry->data['entry'] ?? true)
                $this->build_cat_and_tag($entry,$collection->draftOverride);
        }
    }

    ++$totalCollection;
    if ($this->buildCollectionIndex($collection, 0, $dest)) $collectionCount++;

    for ($page=1; $page <= $totalPages; $page++)
        $this->buildCollectionIndex($collection, $page, $dest);
}

The two PHP functions for fork and join are thus:

protected function beginParallel(int $nentries, int $aprocs) : void {
    $this->pid = 0;
    $this->procnr = 0;
    $this->nprocs = 1;
    if ($nentries < 128) return;	// too few entries to warrant forking
    $this->nprocs = $aprocs;	// aprocs = allowed procs, specified on command-line
    for ($this->procnr=0; $this->procnr<$this->nprocs; ++$this->procnr)
        if (($this->pid = pcntl_fork())) return;	// child returns to work
}

protected function endParallel() : void {
    if ($this->pid) exit(0);	// exit child process; pid=0 is parent
}

This fork and join via pcntl_fork() does not work on Microsoft Windows.

5. Benchmarking. How much of an improvement do we get by this? For this very blog with 561 entries, the runtime can be more than halved. This is in line with the 60% of the runtime used by the output template processing. It should be noted that this blog comprises five collections:

  1. aux: 7 entries
  2. blog: 452 entries, only these are parallelized!
  3. gallery: 7 entries
  4. music: 69 entries
  5. error: 1 entry

The parallelization kicks in only for at least 128 entries. I.e., only the blog-part is parallelized, the music-part and the other parts are not.

Another benchmark is the Lemire blog converted to Simplified Saaze, see Example Theme for Simplified Saaze: Lemire.

Command-lines are:

time php saaze -p16 -mortb /tmp/build
time php saaze -p16 -rb /tmp/buildLemire

Then we vary the parameter -p. All output goes to /tmp, which is a RAM disk in Arch Linux. Obviously, I do not want to measure disk read or write speed; I want to measure the processing speed of Simplified Saaze.

Timings are from time, taking real time.

Blog entries           p=1   p=2   p=4   p=8   p=16
561 posts / this blog  0.28  0.18  0.16  0.13  0.12
561 posts with JIT     0.19  0.17  0.14  0.13  0.12
4,483 posts in Lemire  1.03  1.02  0.65  0.54  0.52
4,483 posts with JIT   0.65  0.64  0.53  0.47  0.46

Overall, with just 20 lines of PHP we can halve the runtime. With JIT enabled the drop in runtime is less pronounced, but the runtime is still almost halved.

The very good performance of JIT, which we can see here, is in line with the findings in Phoronix: PHP 8.0 JIT Is Offering Very Compelling Performance Ahead Of Its Alpha.

GitHub RSS Atom Feeds (Sun, 25 Feb 2024 17:45:00 +0100)

Ronalds Vilcins, in his article RSS feeds for your Github releases, tags and activity, provides a handy overview of some GitHub RSS feeds. I reproduce them here verbatim:

Type           URL
Releases       https://github.com/:owner/:repo/releases.atom
Commits        https://github.com/:owner/:repo/commits.atom
Private feed   https://github.com/:user.private.atom?token=:secret
Tags           https://github.com/:user/:repo/tags.atom
User activity  https://github.com/:user.atom

They are vaguely documented by GitHub here: Get feeds.

For example, for my saaze GitHub repository the feed for the commits is:

<feed xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xml:lang="en-US">
  <id>tag:github.com,2008:/eklausme/saaze/commits/master</id>
  <link type="text/html" rel="alternate" href="https://github.com/eklausme/saaze/commits/master"/>
  <link type="application/atom+xml" rel="self" href="https://github.com/eklausme/saaze/commits/master.atom"/>
  <title>Recent Commits to saaze:master</title>
  <updated>2024-02-17T12:58:12Z</updated>
  <entry>
    <id>tag:github.com,2008:Grit::Commit/48560c8bb5535cfaacdf2fc1be153c43448051d5</id>
    <link type="text/html" rel="alternate" href="https://github.com/eklausme/saaze/commit/48560c8bb5535cfaacdf2fc1be153c43448051d5"/>
    <title>
        Reduced CPU overhead in composer
    </title>
    <updated>2024-02-17T12:58:12Z</updated>
    <media:thumbnail height="30" width="30" url="https://avatars.githubusercontent.com/u/1020520?s=30&amp;v=4"/>
    <author>
      <name>eklausme</name>
      <uri>https://github.com/eklausme</uri>
    </author>
    <content type="html">
      &lt;pre style=&#39;white-space:pre-wrap;width:81ex&#39;&gt;Reduced CPU overhead in composer&lt;/pre&gt;
    </content>
  </entry>
  ...
</feed>

The above output was produced by the command-line below:

curl https://github.com/eklausme/saaze/commits.atom
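
As a small sketch, such a feed can also be consumed directly from PHP with SimpleXML; the Atom entries live in the default namespace, which SimpleXML resolves for plain property access:

<?php
// print date and title of each commit in the Atom feed
$feed = simplexml_load_file('https://github.com/eklausme/saaze/commits.atom');
foreach ($feed->entry as $entry)
    printf("%s  %s\n", $entry->updated, trim($entry->title));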
MD4C PHP Extension (Sat, 24 Feb 2024 22:45:00 +0100)

This blog uses MD4C to convert Markdown to HTML. So far I used PHP:FFI to link PHP with the MD4C C library. FFI is the "Foreign Function Interface" in PHP and allows calling C functions from PHP without writing a PHP extension. Using FFI is very easy.

Previous profiling measurements with XHProf and PHPSPY indicated that the handling of the return value from MD4C via FFI::String takes some time. So I replaced FFI with a "real" PHP extension and measured again. Result: no difference between FFI and the PHP extension. So the profiling measurements were misleading.

Also the following claim in the PHP manual is downright false:

it makes no sense to use the FFI extension for speed; however, it may make sense to use it to reduce memory consumption.

Nevertheless, writing a PHP extension was a good exercise.

Literature on writing PHP extensions is here:

  1. Sara Golemon: Extending and Embedding PHP, Sams Publishing, 2006, xx+410 p.
  2. PHP Internals: Zend extensions
  3. https://github.com/dstogov/php-extension

The PHP extension code is in GitHub: php-md4c.

1. Walk through the C code. For this simple extension there is no need for a separate header file. The extension starts with basic includes for PHP, for phpinfo(), and for MD4C:

// MD4C extension for PHP: Markdown to HTML conversion

#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#include <php.h>
#include <ext/standard/info.h>
#include <md4c-html.h>

The following code is directly from the FFI part php_md4c_toHtml.c:

struct membuffer {
    char* data;
    size_t asize;	// allocated size = max usable size
    size_t size;	// current size
};

The following routines are also almost the same as in the FFI case, except that memory allocation is using safe_pemalloc() instead of native malloc(). In our case this doesn't make any difference.

static void membuf_init(struct membuffer* buf, MD_SIZE new_asize) {
    buf->size = 0;
    buf->asize = new_asize;
    if ((buf->data = safe_pemalloc(buf->asize,sizeof(char),0,1)) == NULL)
        php_error_docref(NULL, E_ERROR, "php-md4c.c: membuf_init: safe_pemalloc() failed with asize=%ld.\n",(long)buf->asize);
}

The next routine uses safe_perealloc() instead of realloc().

static void membuf_grow(struct membuffer* buf, size_t new_asize) {
    buf->data = safe_perealloc(buf->data, sizeof(char*), new_asize, 0, 1);
    if (buf->data == NULL)
        php_error_docref(NULL, E_ERROR, "php-md4c.c: membuf_grow: realloc() failed, new_asize=%ld.\n",(long)new_asize);
    buf->asize = new_asize;
}

The rest is identical to FFI.

static void membuf_append(struct membuffer* buf, const char* data, MD_SIZE size) {
    if (buf->asize < buf->size + size)
        membuf_grow(buf, buf->size + buf->size / 2 + size);
    memcpy(buf->data + buf->size, data, size);
    buf->size += size;
}

static void process_output(const MD_CHAR* text, MD_SIZE size, void* userdata) {
    membuf_append((struct membuffer*) userdata, text, size);
}

static struct membuffer mbuf = { NULL, 0, 0 };

Now we come to something PHP-specific. We encapsulate the C function into PHP_FUNCTION. Furthermore, the arguments of the routine are parsed with ZEND_PARSE_PARAMETERS_START(1, 2). This routine must have at least one argument and may have an optional second argument; that is what is meant by (1, 2). The return string is allocated via estrndup(). In the FFI case we just returned a pointer to a string.

/* {{{ string md4c_toHtml( string $markdown, [ int $flag ] )
 */
PHP_FUNCTION(md4c_toHtml) {	// return HTML string
    char *markdown;
    size_t markdown_len;
    int ret;
    long flag = MD_DIALECT_GITHUB | MD_FLAG_NOINDENTEDCODEBLOCKS;

    ZEND_PARSE_PARAMETERS_START(1, 2)
        Z_PARAM_STRING(markdown, markdown_len)
        Z_PARAM_OPTIONAL Z_PARAM_LONG(flag)
    ZEND_PARSE_PARAMETERS_END();

    if (mbuf.asize == 0) membuf_init(&mbuf,16777216);	// =16MB

    mbuf.size = 0;	// prepare for next call
    ret = md_html(markdown, markdown_len, process_output,
        &mbuf, (MD_SIZE)flag, 0);
    membuf_append(&mbuf,"\0",1); // make it a null-terminated C string, so PHP can deduce length
    if (ret < 0) {
        RETVAL_STRINGL("<br>- - - Error in Markdown - - -<br>\n",sizeof("<br>- - - Error in Markdown - - -<br>\n"));
    } else {
        RETVAL_STRING(estrndup(mbuf.data,mbuf.size));
    }
}
/* }}}*/

The following two PHP-extension-specific functions are just for initialization and shutdown. A diagram in PHP Internals shows the sequence of initialization and shutdown.

Init: Do nothing.

/* {{{ PHP_MINIT_FUNCTION
 */
PHP_MINIT_FUNCTION(md4c) {	// module initialization
    //REGISTER_INI_ENTRIES();
    //php_printf("In PHP_MINIT_FUNCTION(md4c): module initialization\n");

    return SUCCESS;
}
/* }}} */

Shutdown: free the output buffer.

/* {{{ PHP_MSHUTDOWN_FUNCTION
 */
PHP_MSHUTDOWN_FUNCTION(md4c) {	// module shutdown
    if (mbuf.data) pefree(mbuf.data,1);
    return SUCCESS;
}
/* }}} */

The following function prints out information when called via phpinfo().

/* {{{ PHP_MINFO_FUNCTION
 */
PHP_MINFO_FUNCTION(md4c) {
    php_info_print_table_start();
    php_info_print_table_row(2, "MD4C", "enabled");
    php_info_print_table_row(2, "PHP-MD4C version", "1.0");
    php_info_print_table_row(2, "MD4C version", "0.5.2");
    php_info_print_table_end();
}
/* }}} */

In phpinfo() this shows up as a small table.

The code below describes the argument list.

/* {{{ arginfo
 */
ZEND_BEGIN_ARG_INFO(arginfo_md4c_test, 0)
ZEND_END_ARG_INFO()

ZEND_BEGIN_ARG_INFO(arginfo_md4c_toHtml, 1)
    ZEND_ARG_INFO(0, str)
    ZEND_ARG_INFO_WITH_DEFAULT_VALUE(0, flag, "MD_DIALECT_GITHUB | MD_FLAG_NOINDENTEDCODEBLOCKS")
ZEND_END_ARG_INFO()
/* }}} */

/* {{{ php_md4c_functions[]
 */
static const zend_function_entry php_md4c_functions[] = {
    PHP_FE(md4c_toHtml,	arginfo_md4c_toHtml)
    PHP_FE_END
};
/* }}} */

The zend_module_entry is fairly standard. Everything from above is registered here.

/* {{{ md4c_module_entry
 */
zend_module_entry md4c_module_entry = {
    STANDARD_MODULE_HEADER,
    "md4c",						// Extension name
    php_md4c_functions,			// zend_function_entry
    NULL,	//PHP_MINIT(md4c),	// PHP_MINIT - Module initialization
    PHP_MSHUTDOWN(md4c),		// PHP_MSHUTDOWN - Module shutdown
    NULL,						// PHP_RINIT - Request initialization
    NULL,						// PHP_RSHUTDOWN - Request shutdown
    PHP_MINFO(md4c),			// PHP_MINFO - Module info
    "1.0",						// Version
    STANDARD_MODULE_PROPERTIES
};
/* }}} */

This seemingly innocent-looking statement is important: without it you will get PHP Startup: Unable to load dynamic library.

#ifdef COMPILE_DL_TEST
# ifdef ZTS
ZEND_TSRMLS_CACHE_DEFINE()
# endif
#endif
ZEND_GET_MODULE(md4c)

2. M4 config file. PHP extensions require a config.m4 file.

dnl config.m4 for php-md4c extension

PHP_ARG_WITH(md4c, [whether to enable MD4C support],
[  --with-md4c[[=DIR]]       Enable MD4C support.
                          DIR is the path to MD4C install prefix])

if test "$PHP_YAML" != "no"; then

    AC_MSG_CHECKING([for md4c headers])
    for i in "$PHP_MD4C" "$prefix" /usr /usr/local; do
        if test -r "$i/include/md4c-html.h"; then
            PHP_MD4C_DIR=$i
            AC_MSG_RESULT([found in $i])
            break
        fi
    done
    if test -z "$PHP_MD4C_DIR"; then
        AC_MSG_RESULT([not found])
        AC_MSG_ERROR([Please install md4c])
    fi

    PHP_ADD_INCLUDE($PHP_MD4C_DIR/include)
    dnl recommended flags for compilation with gcc
    dnl CFLAGS="$CFLAGS -Wall -fno-strict-aliasing"

    export OLD_CPPFLAGS="$CPPFLAGS"
    export CPPFLAGS="$CPPFLAGS $INCLUDES -DHAVE_MD4C"
    AC_CHECK_HEADERS([md4c.h md4c-html.h], [], AC_MSG_ERROR(['md4c.h' header not found]))
    #AC_CHECK_HEADER([md4c-html.h], [], AC_MSG_ERROR(['md4c-html.h' header not found]))
    PHP_SUBST(MD4C_SHARED_LIBADD)

    PHP_ADD_LIBRARY_WITH_PATH(md4c, $PHP_MD4C_DIR/$PHP_LIBDIR, MD4C_SHARED_LIBADD)
    PHP_ADD_LIBRARY_WITH_PATH(md4c-html, $PHP_MD4C_DIR/$PHP_LIBDIR, MD4C_SHARED_LIBADD)
    export CPPFLAGS="$OLD_CPPFLAGS"

    PHP_SUBST(MD4C_SHARED_LIBADD)
    AC_DEFINE(HAVE_MD4C, 1, [ ])
    PHP_NEW_EXTENSION(md4c, md4c.c, $ext_shared)
fi

3. Compiling. Run

phpize
./configure
make

Symbols are as follows:

$ nm md4c.so
0000000000002160 r arginfo_md4c_test
0000000000003d00 d arginfo_md4c_toHtml
                 w __cxa_finalize@GLIBC_2.2.5
00000000000040a0 d __dso_handle
0000000000003dc0 d _DYNAMIC
                 U _emalloc
                 U _emalloc_64
                 U _estrndup
00000000000016c8 t _fini
                 U free@GLIBC_2.2.5
00000000000016c0 T get_module
0000000000003fe8 d _GLOBAL_OFFSET_TABLE_
                 w __gmon_start__
00000000000021c8 r __GNU_EH_FRAME_HDR
0000000000001000 t _init
                 w _ITM_deregisterTMCloneTable
                 w _ITM_registerTMCloneTable
0000000000004180 b mbuf
00000000000040c0 D md4c_module_entry
                 U md_html
                 U memcpy@GLIBC_2.14
                 U php_error_docref
                 U php_info_print_table_end
                 U php_info_print_table_row
                 U php_info_print_table_start
0000000000003d60 d php_md4c_functions
                 U php_printf
0000000000001640 t process_output
0000000000001234 t process_output.cold
                 U _safe_malloc
                 U _safe_realloc
                 U __stack_chk_fail@GLIBC_2.4
                 U strlen@GLIBC_2.2.5
0000000000004168 d __TMC_END__
                 U zend_parse_arg_long_slow
                 U zend_parse_arg_str_slow
                 U zend_wrong_parameter_error
                 U zend_wrong_parameters_count_error
                 U zend_wrong_parameters_none_error
. . .
0000000000001380 T zif_md4c_toHtml
00000000000011cf t zif_md4c_toHtml.cold
0000000000001175 T zm_info_md4c
0000000000001350 T zm_shutdown_md4c
00000000000016b0 T zm_startup_md4c

4. Installing on Arch Linux. Copy the md4c.so library to /usr/lib/php/modules as root:

cp modules/md4c.so /usr/lib/php/modules

Finally activate the extension in php.ini:

extension=md4c
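
After restarting PHP, a quick smoke test might look like this; the expected output is my assumption of MD4C's usual rendering:

<?php
// smoke test for the md4c extension
echo md4c_toHtml("Hello **MD4C** extension\n");
// expected output: <p>Hello <strong>MD4C</strong> extension</p>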

5. Notes on Windows. On Linux we use the installed MD4C library. As noted in Installing Simplified Saaze on Windows 10 #2 it is advisable to amalgamate all MD4C source files into a single file for easier compilation.

Let's Encrypt Certbot Usage with NGINX (Mon, 19 Feb 2024 16:35:00 +0100)

Previously I used lefh to generate and update Let's Encrypt certificates for the Hiawatha webserver. Unfortunately, this PHP script no longer works. Therefore I installed certbot:

pacman -S certbot-nginx

Updating my domains is like this:

certbot --nginx -d eklausmeier.goip.de,klm.ddns.net,eklausmeier.mywire.org,klmport.no-ip.org,klm.no-ip.org

Its output is roughly

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Requesting a certificate for eklausmeier.goip.de and 4 more domains

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/eklausmeier.goip.de/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/eklausmeier.goip.de/privkey.pem
This certificate expires on 2024-05-19.
These files will be updated when the certificate renews.

Reference the first two files in /etc/nginx/nginx.conf:

ssl_certificate      /etc/letsencrypt/live/eklausmeier.goip.de/fullchain.pem;
ssl_certificate_key  /etc/letsencrypt/live/eklausmeier.goip.de/privkey.pem;

Check with nginx -t. If all is OK, then restart with systemctl restart nginx.

A final check can be done with Qualys SSL Labs.

Considerations on a Newsletter Program (Sun, 11 Feb 2024 17:40:00 +0100)

1. Statement of the problem. This blog does not offer any newsletter functionality. If a reader is interested to know whether I have posted new content, he must either use an RSS feed or directly visit this site. WordPress offers the possibility of getting notified of new posts automatically, i.e., a user can easily subscribe to new content.

On my old WordPress blog, https://eklausmeier.wordpress.com, I had 79 subscribers. From their e-mail names, I would suspect that some of them were not really interested in my actual content but were a little bit spammy. Nevertheless, many seemed to be legitimate.

There are a lot of professional newsletter services on the market. For example:

  1. https://www.mailjet.com
  2. https://buttondown.email
  3. https://mailchimp.com
  4. https://omnisend.com

There are many more.

These solutions should be differentiated from mailing list software.

2. Data model. Initially I thought of a single file used to store all information. Handling of this subscription file would work like this: read it into a PHP hash table, change whatever needs changing, and if a change is required, e.g., a new subscriber, then move the old file away and write a new file from the hash.

However, this file needs some protection using flock() to guard against simultaneous writing to it. After some thought it seems more advantageous to use a simple SQLite file, i.e., a database, which already handles concurrency out of the box.

A single database table suffices. Henceforth this table is called subscription.

Nr. | Column       | Type | Nullable | Example or meaning
----|--------------|------|----------|-------------------
1   | email        | text | not null | primary key, e.g., Peter.Miller@super.com
2   | firstname    | text | null     | e.g., Peter
3   | lastname     | text | null     | e.g., Miller
4   | registration | date | not null | date of registration, e.g., 06-Feb-2024
5   | IP           | text | not null | IP address of web client during initial subscription, e.g., 84.119.108.23
6   | status       | int  | not null | 1=in-limbo, 2=active, 3=inactive, 4=bounced during registration, 5=bounced
7   | token        | text | not null | e.g., uIYkEk+ylks=, computed with $token = base64_encode(random_bytes(8));

State diagram for status is as below.

graph LR
    A(1=in-limbo) --> B(2=active)
    B --> C(3=inactive)
    A --> D(4=bounced during registration)
    B --> E(5=bounced)

Create script for SQLite is like this:

drop table subscription;

create table subscription (
    email       text primary key,
    firstname   text,
    lastname    text,
    registration    date not null,
    IP          text not null,
    status      int not null,
    token       text not null
);

The following SQL statements will be used:

  1. During sending out the newsletter: select email, firstname from subscription where status=2 (2=active, see the status column above)
  2. New subscriber: insert into subscription (email,firstname,lastname,registration,IP,status,token) values (...)
  3. Checking correct token: select token from subscription where email=:m
  4. Updating status column: update subscription set status=:s where email=:m
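
Below is a minimal PHP sketch of these statements, assuming the SQLite database file is called subscription.db; error handling is largely omitted:

<?php
// Open the SQLite database; the database handles locking/concurrency
$db = new PDO('sqlite:subscription.db');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// New subscriber: status 1=in-limbo until the token is confirmed
$token = base64_encode(random_bytes(8));
$ins = $db->prepare('insert into subscription'
    . ' (email,firstname,lastname,registration,IP,status,token)'
    . " values (:m,:f,:l,date('now'),:ip,1,:t)");
$ins->execute([':m'=>'Peter.Miller@super.com', ':f'=>'Peter', ':l'=>'Miller',
    ':ip'=>$_SERVER['REMOTE_ADDR'] ?? '', ':t'=>$token]);

// Confirmation: check the token, then switch status to 2=active
$sel = $db->prepare('select token from subscription where email=:m');
$sel->execute([':m'=>'Peter.Miller@super.com']);
if ($sel->fetchColumn() === $token) {
    $upd = $db->prepare('update subscription set status=:s where email=:m');
    $upd->execute([':s'=>2, ':m'=>'Peter.Miller@super.com']);
}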

The following columns could be added to better cope with malicious users.

Nr. | Column     | Type | Nullable | Example or meaning
----|------------|------|----------|-------------------
8   | lastRegist | date | null     | date of last registration, relevant only for multiple subscriptions for the same e-mail
9   | lastIP     | text | null     | last used IP of the web client, when used for multiple subscriptions

3. Sketch of solution. Here are considerations and requirements for simple newsletter software.

  1. Programming this application in PHP is preferred, as PHP can be installed on many hosting providers, which offer PHP, e-mail, DNS, etc.
  2. Have one single database table, called subscription, see above.
  3. Periodically read incoming e-mails for new subscription or unsubscription requests.
  4. A new subscriber adds an entry to the subscription table.
  5. Subscription requests will generate a random token, which is sent to the e-mail address.
  6. Unsubscribe requests set the status column to inactive in the subscription table.
  7. During deployment of a new post on the static site, or by manual start, send an e-mail to all recipients in the subscription table that are active.
  8. The IP address of the registering web client is stored. With this we can defend against flooding with e-mail addresses which all bounce. For example, this IP address can then be blocked in the firewall of the web-server.

The token does not need to be overly confidential. Its purpose is to defend against funny/stupid/malicious actors who want to unsubscribe people against their will.

Handling of e-mails: For reading e-mail you can use imap_headers(), for sending imap_mail(). Also see Sending email using PhpMailer with Gmail XOAUTH2, and Gmail Email Inbox using PHP with IMAP.

Subscribing to the mailing list works with an empty e-mail that states Subscribe in the subject line. For unsubscribing you send Unsubscribe in the subject line and the token in the body part. These two operations are also supported by a simple web-form, which essentially asks for the e-mail address and the token from the user and then sends the confirmation e-mail and sets the status in the subscription table.

Reading e-mails is done every 20 minutes, e.g., controlled by cron. The reading process then analyses the subject field for Subscribe and Unsubscribe. This process also checks for any bounces. In case of a bounce the status flag is set to either bounced or bounced during registration. No distinction is made between hard and soft bounces.
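
A minimal sketch of this cron-driven check, assuming PHP's IMAP extension and placeholder host, account, and password:

<?php
// Poll the inbox; intended to run from cron, e.g.:
// */20 * * * * php /path/to/check-mail.php   (path is a placeholder)
$mbox = imap_open('{imap.example.org:993/imap/ssl}INBOX', 'newsletter', 'secret');
if ($mbox === false) die('imap_open failed: ' . imap_last_error() . "\n");
for ($i = 1; $i <= imap_num_msg($mbox); ++$i) {
    $hdr = imap_headerinfo($mbox, $i);
    $subject = $hdr->subject ?? '';
    $from = $hdr->from[0]->mailbox . '@' . $hdr->from[0]->host;
    if (strcasecmp($subject, 'Subscribe') == 0) {
        // insert into subscription with status=1 (in-limbo), send token to $from
    } elseif (strcasecmp($subject, 'Unsubscribe') == 0) {
        // read token via imap_body($mbox, $i), then set status=3 (inactive)
    }
    // bounce handling would inspect the message and set status 4 or 5
}
imap_close($mbox);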

A subscription request makes an entry in the subscription table and sets the status column to in-limbo. The sender receives an e-mail, which he must confirm by e-mail or web form. Once the confirming e-mail is received or the web form is used to confirm, the status column is set to active. If a new subscription request is made with an already existing e-mail address, then a new token is generated and sent, and the status keeps its previous value, e.g., it might remain active or in-limbo.

If a malicious user subscribes multiple e-mail addresses which he does not own, then all these e-mail addresses are set to in-limbo. If the legitimate user now wants to subscribe, he can do so without fuss, because new tokens are sent out for any subscription request. This prevents unconfirmed e-mail addresses from being blocked.

4. Web forms. The HTML form for processing subscribe and unsubscribe requests looks very simple:

[Web form with fields: First name, Last name, E-mail address, Token (only required for unsubscribe)]

Changing your e-mail address is done by subscribing to the new address, and then unsubscribing from the old one.

If you have lost or deleted the token for unsubscribing, then simply subscribe again with the same e-mail address. A token will be sent to you, which you can then use for unsubscribing.

While the e-mail address is mandatory, the first and last name are optional.

The actual e-mailing can be done with the simple HTML form below:

[Web form with fields: Greeting (the first name will be used), Content]

The following e-mails are sent depending on the circumstances:

  1. Once a user has entered his name and e-mail on the HTML form, he will be sent an e-mail to confirm his e-mail address with the generated token.
  2. If the user has unsubscribed from the mailing list, he will receive a confirmation e-mail, which confirms that he has unsubscribed. If the token is wrong then no e-mail will be sent.
  3. The actual content is sent to all members stored in the subscription table that are active. This is the whole purpose of maintaining this e-mail list.

5. Effort estimation. I expect the whole code for this to be no more than 1,000 lines of PHP. I expect the following PHP programs/files:

  1. Handling the web form.
  2. A program run through cron, checking for new subscription or unsubscription requests, and checking for bounces.
  3. Configuration for user-id, password, and hostname of the e-mail host.
  4. Sending an e-mail to each recipient in the subscription table, either by using a web form, or via command-line, taking a text file as input.

Possible problems ahead due to hosting limitations:

  1. If you want to use Google Mail as mail provider you will encounter their limit of 500 mails per day.
  2. Yahoo seems to have a limit of 500 mails per day.
  3. Outlook also has a 500 mails per day limit.
  4. IONOS imposes a 500 mails per hour limit.
  5. Hetzner similarly restricts to 500 mails per hour.
  6. Amazon SES has a limit of 200 mails per day.

To counter the above limits somewhat, you can split your e-mails into batches, i.e., send 500 e-mails the first hour, then another 500 mails the next hour. For this you need an additional table which stores the batch number and the message text to be sent. Obviously you will not actually send 500 e-mails, but rather 450 or so, to leave room for the confirmation mails for new subscribers or unsubscribers.
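
A minimal sketch of such batching, assuming the batch number is passed by an hourly cron job and send_one() is a hypothetical mailer function:

<?php
// Send one batch of at most 450 active recipients per invocation
$db = new PDO('sqlite:subscription.db');
$batch = (int)($argv[1] ?? 0);    // batch number, incremented hourly by cron
$sel = $db->prepare('select email, firstname from subscription'
    . ' where status=2 order by email limit 450 offset :o');
$sel->bindValue(':o', $batch * 450, PDO::PARAM_INT);
$sel->execute();
foreach ($sel as $row) {
    // send_one($row['email'], $row['firstname'], $content);   // hypothetical
}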

I am quite surprised that a Google search didn't reveal any program that already does something similar. The closest match is phpList.

]]>
https://eklausmeier.goip.de/blog/2024/02-10-stabilitaet-und-polynome https://eklausmeier.goip.de/blog/2024/02-10-stabilitaet-und-polynome Stabilität und Polynome Sat, 10 Feb 2024 11:00:00 +0100 1. Theorem: Routh/Hurwitz stability criterion, after Routh, Edward John (1831--1907), Hurwitz, Adolf (1859--1919).

Assumptions: Let

$$ p(z) = a_0z^n + a_1z^{n-1} + \cdots + a_{n-1}z + a_n = a_0 (z - \lambda_1) \ldots (z - \lambda_n) $$

be an arbitrary complex polynomial with coefficients $a_i\in\mathbb{C}$ and roots $\lambda_i\in\mathbb{C}$. Further let

$$ \displaylines{ \Delta_1 = a_1, \qquad \Delta_2 = \left|\matrix{a_1&a_3\cr a_0&a_2\cr}\right|, \qquad \Delta_3 = \left|\matrix{a_1&a_3&a_5\cr a_0&a_2&a_4\cr 0&a_1&a_3\cr}\right|, \quad\ldots, \cr \Delta_n = \left|\matrix{ a_1 & a_3 & \ldots\cr a_0 & a_2 & \ldots\cr & a_1 & a_3 & \ldots\cr & a_0 & a_2 & \ldots\cr && a_1 & a_3 & \ldots\cr && a_0 & a_2 & \ldots\cr &&& \ddots & \ddots\cr & 0 &&& a_1 & a_3 & \ldots\cr & &&& a_0 & a_2 & \ldots\cr }\right|, \cr } $$

with the convention $a_{n+1}=a_{n+2}=\cdots=0$.

Claim: $\mathop{\rm Re}\nolimits \lambda_i<0$ if and only if

$$ a_0\Delta_1\gt 0,\: \Delta_2\gt 0,\: a_0\Delta_3\gt 0,\: \Delta_4\gt 0,\: \ldots,\: \cases{a_n\Delta_n\gt 0, & $n$ even,\cr \Delta_n\gt 0, & $n$ odd.\cr} $$

For $a_0>0$ this means $\Delta_i>0$, $i=1,\ldots,n$.
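
A quick illustration: for $p(z)=z^2+3z+2=(z+1)(z+2)$, with roots $-1$ and $-2$, one has

$$ \Delta_1 = a_1 = 3, \qquad \Delta_2 = \left|\matrix{3&0\cr 1&2\cr}\right| = 6, \qquad a_0\Delta_1 = 3 \gt 0, \quad a_2\Delta_2 = 12 \gt 0, $$

in accordance with $\mathop{\rm Re}\nolimits \lambda_i<0$.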

Proof: See the book by Gantmacher, Felix Ruvimovich (1908--1964), Gantmacher (1986), §16.6, "Matrizentheorie", Springer-Verlag, Berlin Heidelberg New York Tokyo, translated from the Russian by Helmut Boseck, Dietmar Soyka and Klaus Stengert, 1986, 654 pages.     ☐

The above theorem is a special case of the general Routh/Hurwitz theorem, which allows one to state the exact number of roots with strictly negative real part. The following theorem of Liénard/Chipart from 1914 has the advantage over the Routh/Hurwitz stability criterion that only about half as many minors need to be checked for their sign.

2. Theorem: Liénard/Chipart stability criterion, after Chipart, A.H., Liénard, Alfred-Marie (1869--1958).

Claim: $\mathop{\rm Re}\nolimits \lambda_i<0$ is equivalent to any one of the following 4 statements:

(1)     $a_n>0$, $a_{n-2}>0$, $\ldots$; $\Delta_1>0$, $\Delta_3>0$, $\ldots$,

(2)     $a_n>0$, $a_{n-2}>0$, $\ldots$; $\Delta_2>0$, $\Delta_4>0$, $\ldots$,

(3)     $a_n>0$, $a_{n-1}>0$, $a_{n-3}>0$, $\ldots$; $\Delta_1>0$, $\Delta_3>0$, $\ldots$,

(4)     $a_n>0$, $a_{n-1}>0$, $a_{n-3}>0$, $\ldots$; $\Delta_2>0$, $\Delta_4>0$, $\ldots.$

Proof: See again Gantmacher (1986), §16.13.     ☐

For checking a given polynomial one then conveniently chooses from the four conditions the one for which $\Delta_{n-1}$ or $\Delta_n$ has the smaller number of rows.

]]>
https://eklausmeier.goip.de/blog/2024/02-09-formel-von-faa-di-bruno https://eklausmeier.goip.de/blog/2024/02-09-formel-von-faa-di-bruno Die Formel von Faà di Bruno Fri, 09 Feb 2024 21:00:00 +0100 The formula of Faà di Bruno, Faà di Bruno, Francesco (1825--1888), generalizes the chain rule to arbitrarily high derivatives.

1. Theorem: Formula of Faà di Bruno. Let $w$ depend on $u$, where $u$ is a function of $x$. Let $D_x^k u$ denote the $k$-th derivative of $u$ with respect to $x$. Then

$$ D_x^n w = \sum_{j=0}^n \sum_{\scriptstyle{k_1+k_2+\cdots+k_n=j}\atop {\scriptstyle{k_1+2k_2+\cdots+nk_n=n}\atop \scriptstyle{k_1,k_2,\ldots,k_n\ge0}}} {n!{\mskip 3mu} D_u^j w\over k_1! (1!)^{k_1} \cdots k_n! (n!)^{k_n}} (D_x^1 u)^{k_1} \cdots (D_x^n u)^{k_n}. $$
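
For $n=2$ this reduces to the familiar second-derivative chain rule: the only admissible index choices are $k_1=2$, $k_2=0$ (with $j=2$) and $k_1=0$, $k_2=1$ (with $j=1$), giving

$$ D_x^2 w = D_u^2 w{\mskip 3mu}(D_x^1 u)^2 + D_u^1 w{\mskip 3mu}D_x^2 u. $$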

Proof: See Knuth, Donald Ervin (*1938), The Art of Computer Programming, Volume 1 -- Fundamental Algorithms, Addison-Wesley Publishing Company, Reading (Massachusetts) Menlo Park (California) London Sydney Manila, 1972, second printing, xxi+634 pages. See McEliece, Robert James, in the above-mentioned book by Knuth. If $c(n,j,k_1,k_2,\ldots)$ denotes the fraction term, one computes by differentiating

$$ \eqalignno{ c(n+1,j,k_1,\ldots){}={}& c(n,j-1,k_2,\ldots)\cr & {}+(k_1+1){\mskip 3mu}c(n,j,k_1+1,k_2-1,k_3,\ldots)\cr & {}+(k_2+1){\mskip 3mu}c(n,j,k_1,k_2+1,k_3-1,k_4,\ldots) + \ldots {\mskip 3mu}. } $$

Here it is advantageous to assume infinitely many $k_i$, although $k_{n+1}=k_{n+2}=\cdots=0$. In the induction step, $k_1+\cdots+k_n=j$ and $k_1+2k_2+\cdots+nk_n=n$ are invariants. One can now cancel $n! / k_1! (1!)^{k_1} k_2! (2!)^{k_2}\ldots$ and then arrives at $k_1+2k_2+\cdots=n+1$. Compare also Bourbaki and Schwartz.     ☐

]]>
https://eklausmeier.goip.de/blog/2024/02-08-taylorformel-fuer-vektorfunktionen https://eklausmeier.goip.de/blog/2024/02-08-taylorformel-fuer-vektorfunktionen Taylorformel für Vektorfunktionen Thu, 08 Feb 2024 21:00:00 +0100 From the one-dimensional case the Lagrange and Schlömilch remainder terms are well known. Lagrange, Joseph Louis (1736--1813), Schlömilch, Otto (1823--1901).

$$ \eqalignno{ f(x) &= \sum_{k=0}^n {f^{(k)}(a)\over k!}(x-a)^k + {1\over n!}\int_a^x (x-t)^n f^{(n+1)}(t) dt\cr &= \sum_{k=0}^n {f^{(k)}(a)\over k!}(x-a)^k + {f^{(n+1)}(\xi)\over(n+1)!}(x-a)^{n+1} \qquad\hbox{(Lagrange)}\cr &= \sum_{k=0}^n {f^{(k)}(a)\over k!}(x-a)^k + o(\left|x-a\right|^n)\cr &= \sum_{k=0}^n {f^{(k)}(a)\over k!}(x-a)^k + {f^{(n+1)}(\xi)\over p\cdot n!}(x-\xi)^{n+1-p} (x-a)^p. \qquad\hbox{(Schlömilch)}\cr } $$

These representations of $f$ can be generalized correspondingly to vector-valued functions. As in the one-dimensional case, the emphasis here again lies on obtaining remainder formulas, or in the words of Mangoldt and Knopp (Mangoldt, Hans Carl Friedrich von (1854--1925), Knopp, Konrad Hermann Theodor (1882--1957)):

Let it be expressly emphasized once more that the essential content of Taylor's theorem does not consist in the fact that an expression of the form

$$f(x_0+h)=f(x_0)+{f'(x_0)\over1!}h+{f''(x_0)\over2!}h^2+\cdots+ {f^{(n)}(x_0)\over n!}h^n+R_n $$

can be written down at all. Rather, under the sole assumption that $f^{(n)}(x_0)$ exists, this is possible in any case for every $h$ of sufficiently small absolute value. $\ldots$ $R_n$ is merely an abbreviation for the difference between the left-hand side and the sum of these first $(n+1)$ terms of the right-hand side. The crux of the problem, and thus the only essential content of Taylor's theorem, lies exclusively in the statements that can be made about this remainder term.

1. Definition: (Multi-indices) For $\alpha=(\alpha_1,\ldots,\alpha_n)\in\mathbb{N}^n$ the order of a multi-index and the multi-factorial are defined as

$$ \left|\alpha\right| := \alpha_1+\cdots+\alpha_n, \qquad \alpha! := \alpha_1! \alpha_2! \cdot\ldots\cdot \alpha_n! $$

If $f$ is an $\left|\alpha\right|$-times continuously differentiable function, the multi-derivative is defined as

$$ D^\alpha f := D_1^{\alpha_1} D_2^{\alpha_2} \ldots D_n^{\alpha_n} f = {\partial^{\left|\alpha\right|} f\over \partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n} }, $$

in particular $D_i^{\alpha_i}=D_i\ldots D_i$ ($\alpha_i$ times). The multi-power of a vector $x$ is

$$ x^\alpha := x_1^{\alpha_1} x_2^{\alpha_2} \cdot\ldots\cdot x_n^{\alpha_n}{\mskip 5mu}. $$

By the theorem of H.A. Schwarz, Schwarz, Hermann Armandus (1843--1921), the order of differentiation with respect to different variables is irrelevant for a sufficiently smooth function $f$.

2. Lemma: We have

$$ (x_1+x_2+\cdots+x_n)^k = \sum_{\left|\alpha\right|=k} {k!\over\alpha!} x^\alpha, \qquad\forall k\in\mathbb{N}. $$

Proof: By induction on $n$, if one assumes the binomial formula; by induction on $k$, if one does not want to use it. For $n=1$ the claim is clear. For the induction step one groups $[x_1+(x_2+\cdots+x_n)]^k$.     ☐
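
For instance, for $n=3$ and $k=2$:

$$ (x_1+x_2+x_3)^2 = x_1^2+x_2^2+x_3^2+2x_1x_2+2x_1x_3+2x_2x_3, $$

where the coefficient $2!/\alpha!$ equals 1 for $\alpha$ a permutation of $(2,0,0)$ and 2 for $\alpha$ a permutation of $(1,1,0)$.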

Correspondingly,

$$ p(x) := (h_1x_1+\cdots+h_nx_n)^k = \sum_{\left|\alpha\right|=k} {k!\over\alpha!} h^\alpha x^\alpha, $$

hence

$$ p(D)f = \left( \sum_{i=1}^n h_iD_i \right)^k f = \sum_{\left|\alpha\right|=k} {k!\over\alpha!} D^\alpha f{\mskip 3mu}h^\alpha. $$

Standing assumption: Let $f\colon U\subset\mathbb{R}^n\rightarrow\mathbb{R}$ be $k$-times continuously differentiable on the open set $U$. Let $x\in U$ and $h\in\mathbb{R}^n$ be such that $x+th\in U$, $\forall t\in[0,1]$. Let $g\colon[0,1]\rightarrow\mathbb{R}$, with $g(t):=f(x+th)$.

3. Auxiliary lemma: The function $g$ is $k$-times continuously differentiable and

$$ g^{(k)}(t) = \sum_{\left|\alpha\right|=k} {k!\over\alpha!} D^\alpha f(x+th){\mskip 3mu}h^\alpha. $$

Proof: Induction on the order of the multi-index, i.e., on $k$. For $k=1$, by the chain rule,

$$ g'(t) = \mathop{\rm grad} f(x+th)\cdot h = \sum_{i=1}^n D_i f(x+th){\mskip 3mu}h_i. $$

Induction step from $(k-1)\rightarrow k$:

$$ g^{(k-1)}(t) = \sum_{\left|\alpha\right|=k-1} {(k-1)!\over\alpha!} h^\alpha{\mskip 3mu} D^\alpha f(x+th) = \underbrace{\left[\sum_{i=1}^n (h_i D_i)^{k-1} f\right]}_{=:{\mskip 5mu}S} (x+th); $$

Applying the chain rule and the lemma yields

$$ g^{(k)}(t) = \left[\left(\sum_{i=1}^n h_i D_i\right) S\right] (x+th) = \left[\left(\sum_{i=1}^n h_i D_i\right)^k f \right] (x+th) = \sum_{\left|\alpha\right|=k} {k!\over\alpha!} h^\alpha \left(D^\alpha f\right)(x+th). $$

    ☐

4. Theorem: Taylor's theorem, Taylor, Brook (1685--1731). Let $f$ now even be $(k+1)$-times continuously differentiable. Then there exists a $\theta\in[0,1]$ such that

$$ f(x+h) = \sum_{\left|\alpha\right|\le k} {D^\alpha f(x)\over\alpha!} h^\alpha + \sum_{\left|\alpha\right|=k+1} {D^\alpha f(x+\theta h)\over\alpha!} h^\alpha. $$

Proof: Like $f$, $g$ is at least $(k+1)$-times continuously differentiable. By the Taylor formula for one variable there exists a $\theta\in[0,1]$ such that

$$ g(1) = \sum_{m=0}^k {g^{(m)}(0)\over m!} + {g^{(k+1)}(\theta)\over(k+1)!}. $$

Substituting the formulas obtained in the auxiliary lemma immediately yields the result.     ☐

5. Corollary: Let $f$ be at least $k$-times continuously differentiable and let $h$ be sufficiently small. Then

$$ f(x+h) = \sum_{\left|\alpha\right|\le k} {D^\alpha f(x)\over\alpha!} h^\alpha + o(\left\|h\right\|^k), $$

where $o(\left\|h\right\|^k)$ stands as an abbreviation for a function $\varphi$ with $\varphi(0)=0$ and

$$ \lim_{\scriptstyle h\to0\atop\scriptstyle h\ne0} {\varphi(h)\over\left\|h\right\|^k} = 0. $$

Proof: By the preceding theorem (applied with $k-1$ in place of $k$) there is a $\theta\in[0,1]$ depending on $h$ with

$$ f(x+h) = \sum_{\left|\alpha\right|\le k-1} {D^\alpha f(x)\over\alpha!} h^\alpha + \sum_{\left|\alpha\right|=k} {D^\alpha f(x+\theta h)\over\alpha!} h^\alpha = \sum_{\left|\alpha\right|\le k} {D^\alpha f(x)\over\alpha!} h^\alpha + \sum_{\left|\alpha\right|=k} r_\alpha(h){\mskip 3mu}h^\alpha, $$

where

$$ r_\alpha(h) = {D^\alpha f(x+\theta h) - D^\alpha f(x)\over\alpha!}. $$

Because of the assumed continuity of $D^\alpha f$, $r_\alpha(\cdot)$ vanishes at 0, i.e., $\displaystyle\lim_{h\to0} r_\alpha(h)=0$. Setting

$$ \varphi(h) := \sum_{\left|\alpha\right|=k} r_\alpha(h){\mskip 3mu}h^\alpha, $$

it follows that $\displaystyle\lim_{h\to0} {\varphi(h) / \left\|h\right\|^k} = 0$, i.e., $\varphi(h)=o(\left\|h\right\|^k)$, since

$$ {\left|h^\alpha\right|\over\left\|h\right\|^k} = { \left|h_1^{\alpha_1}\ldots h_n^{\alpha_n}\right| \over \left\|h\right\|^{\alpha_1}\ldots\left\|h\right\|^{\alpha_n} } \le 1, \qquad\hbox{for}\quad \left|\alpha\right| = k. $$

    ☐

Taylor's theorem in $\mathbb{R}^m$ arises by componentwise application of the previous results. However, one needs $m$ possibly different intermediate points.

6. Example: Let $f\colon\mathbb{R}\rightarrow\mathbb{R}^3$ with $f(t):=(\sin t,{\mskip 3mu}\cos t,{\mskip 3mu}t)$. Then $f'(t)=(\cos t,{\mskip 3mu}-\sin t,{\mskip 3mu}1)$, and if one allows only a single intermediate point, one obtains the contradiction

$$ f(2\pi)-f(0) = f'(\xi)(2\pi-0) = 2\pi\pmatrix{\cos\xi\cr -\sin\xi\cr 1\cr} = \pmatrix{0\cr 0\cr 2\pi\cr}. $$

From $\cos\xi=0=\sin\xi$ it would follow that $\cos^2\xi+\sin^2\xi=0$.

Literature: Otto Forster (*1937): Analysis 2.

]]>
https://eklausmeier.goip.de/blog/2024/02-07-differentiation-von-matrizen-und-determinanten https://eklausmeier.goip.de/blog/2024/02-07-differentiation-von-matrizen-und-determinanten Differentiation von Matrizen und Determinanten Wed, 07 Feb 2024 07:00:00 +0100 How does one differentiate determinants that depend on a parameter?

1. Theorem: Assumptions: Let $a_{ij}(\lambda)$ be differentiable functions. Let

$$ \def\multisub#1#2{{\textstyle\mskip-3mu{\scriptstyle1\atop\scriptstyle#2_1}{\scriptstyle2\atop\scriptstyle#2_2}{\scriptstyle\ldots\atop\scriptstyle\ldots}{\scriptstyle#1\atop\scriptstyle#2_#1}}} \def\multisup#1#2{{\textstyle\mskip-3mu{\scriptstyle#2_1\atop\scriptstyle1}{\scriptstyle#2_2\atop\scriptstyle2}{\scriptstyle\ldots\atop\scriptstyle\ldots}{\scriptstyle#2_{#1}\atop\scriptstyle#1}}} \def\multisubsup#1#2#3{{\textstyle\mskip-3mu{\scriptstyle#3_1\atop\scriptstyle#2_1}{\scriptstyle#3_2\atop\scriptstyle#2_2}{\scriptstyle\ldots\atop\scriptstyle\ldots}{\scriptstyle#3_{#1}\atop\scriptstyle#2_{#1}}}} A(\lambda) = \left|\matrix{ a_{11}(\lambda) & \ldots & a_{1n}(\lambda)\cr \vdots & \ddots & \vdots\cr a_{n1}(\lambda) & \ldots & a_{nn}(\lambda)\cr }\right| = \det(a_1,\ldots,a_n), $$

furthermore

$$ \alpha\multisubsup rik = (-1)^{i_1+\cdots+i_r + k_1+\cdots+k_r} A\multisubsup r{i'}{k'}, $$

insbesondere $\displaystyle{ \alpha_i^j = (-1)^{i+j} A_{1\ldots\widehat\imath\ldots n}^{1\ldots\widehat\jmath\ldots n}. }$

Claim:

$$ \displaystyle{{\partial\over\partial\lambda}A = (\alpha_{11},\ldots,\alpha_{nn}) \pmatrix{a_{11}'\cr \vdots\cr a_{nn}'\cr} = \sum_{i,j=1}^n \alpha_i^j a_{ij}' } = \sum_{i=1}^n \det(a_1,\ldots,a_{i-1},a_i',a_{i+1},\ldots,a_n) $$

Proof: Expanding $A(\lambda)$ along the $i$-th row by the Laplace expansion theorem, one recognizes $\partial A/\partial(a_{ij}) = \alpha_i^j$. Applying the chain rule yields the middle identities. The last identity is merely a rearrangement of the previous one (Laplace expansion theorem read backwards).     ☐
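
As an illustration, for $n=2$ the last identity is just the product rule applied columnwise:

$$ {\partial\over\partial\lambda} \left|\matrix{a_{11}&a_{12}\cr a_{21}&a_{22}\cr}\right| = \left|\matrix{a_{11}'&a_{12}\cr a_{21}'&a_{22}\cr}\right| + \left|\matrix{a_{11}&a_{12}'\cr a_{21}&a_{22}'\cr}\right| = a_{11}'a_{22}+a_{11}a_{22}'-a_{12}'a_{21}-a_{12}a_{21}'. $$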

Compare also Bourbaki (1976): "Éléments de mathématique: Fonctions d'une variable réelle -- Théorie élémentaire", Hermann, Paris, 1976, 54+38+69+46+55+31+38 pp. = 331 pp.

2. The Jacobian matrices of some matrix functions, such as trace, determinant, matrix product.

Let $y=f(x_{11},\ldots,x_{1n},x_{21},\ldots,x_{2n},\ldots,x_{m1},\ldots,x_{mn})$ be a real function of $mn$ variables, i.e., $y=f(X)$. Denote

$$ {dy\over dX} := \left(\partial y\over\partial x_{ij}\right) _{\scriptstyle{i=1,\ldots,m}\atop\scriptstyle{j=1,\ldots,n}} . $$

In the case $X=(x_1,\ldots,x_n)$ we have ${{dy\over dX}=\nabla y}$.

3. Theorem: (1)     $\displaystyle{{d{\mskip 5mu}ax\over dx} = a}$,     $\displaystyle{{d{\mskip 5mu}x^\top Ax\over dx} = 2Ax}$,     ($A=A^\top$).

(2)     $\displaystyle{{d{\mskip 5mu}\ln\det X\over dX} = (X^\top)^{-1}}$,     $\displaystyle{{d{\mskip 5mu}\det X\over dX} = (\det X){\mskip 3mu}(X^\top)^{-1}}$.

(3)     $\def\tr{\mathop{\rm tr}}\displaystyle{{d{\mskip 5mu}\tr X^{-1}A\over dX} = -(X^{-1} A X^{-1})^\top}$.

Proof: (1) is clear. For (2) note

$$ {\partial\over\partial x_{ij}}\det X = \alpha_i^j = (-1)^{i+j} X_{1\ldots\hat\imath\ldots n}^{1\ldots\hat\jmath\ldots n} $$

correspondingly

$$ {\partial\over\partial x_{ij}}\ln\det X = {1\over\det X} \alpha_i^j. $$

For (3): We have

$$ {d{\mskip 3mu}X^{-1}\over dx_{ij}} = -X^{-1} E_{ij} X^{-1}, \qquad \tr E_{ij} B = b_{ji}, \qquad {d{\mskip 3mu}\tr B\over dx} = \tr{dB\over dx}. $$

    ☐

]]>
https://eklausmeier.goip.de/blog/2024/02-06-holomorphe-matrixfunktionen https://eklausmeier.goip.de/blog/2024/02-06-holomorphe-matrixfunktionen Holomorphe Matrixfunktionen Tue, 06 Feb 2024 11:00:00 +0100 1. Integral definition

1. Let $f$ be a suitably chosen holomorphic function. Then for a square matrix $A$ one defines the matrix function $f(A)$ as

$$ f(A) := {1\over2\pi i}\int_\Gamma f(\lambda) (I\lambda-A)^{-1} d\lambda. $$

By Cauchy's theorem, Cauchy, Augustin Louis (1789--1857), $f(A)$ does not depend on the choice of the curve $\Gamma$. Obviously $S^{-1}f(A)S=f(S^{-1}AS)$ for every invertible $(n\times n)$-matrix $S$. Without loss of generality one may therefore assume in the following considerations that $A$ is a Jordan matrix, Jordan, Camille (1838--1922). So $A = J = \mathop{\rm diag}(J_\nu)_{\nu=1}^k$, where each $J_\nu$ is a Jordan block. We have

$$ f(J) = {1\over2\pi i}\int_\Gamma f(\lambda) (I\lambda-J)^{-1} d\lambda = \mathop{\rm diag}_{\nu=1}^k \left({1\over2\pi i}\int_\Gamma f(\lambda) (I\lambda-J_\nu)^{-1} d\lambda\right) = \mathop{\rm diag}_{\nu=1}^k f(J_\nu). $$

Many claims thus even reduce merely to the consideration of a single Jordan block $J_\nu$, with $J_\nu=\lambda_0\delta_{xy}+\left(\delta_{x+1,y}\right)_{x,y=1}^m$.

2. Now let $J$ be a Jordan block of size $k\times k$ for the eigenvalue $\lambda_0$. Then

$$ f(J) = \pmatrix{ f(\lambda_0) & {1\over1!}f'(\lambda_0) & \ldots & {1\over(k-1)!}f^{(k-1)}(\lambda_0)\cr 0 & f(\lambda_0) & \ldots & \cr \vdots & \vdots & \ddots & \vdots\cr 0 & 0 & \ldots & f(\lambda_0)\cr } $$

In particular, for the special function $f(\lambda):=\lambda^n$ one obtains

$$ J^n = \pmatrix{ \lambda^n & {n\choose1}\lambda^{n-1} & \ldots & {n\choose k-1}\lambda^{n-k+1}\cr 0 & \lambda^n & \ldots & \cr \vdots & \vdots & \ddots & \vdots\cr 0 & 0 & \ldots & \lambda^n\cr }, $$

where $\lambda^{-j}:=0$ for $j\in\mathbb{N}$.
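
For example, for a $2\times2$ Jordan block and $f(\lambda)=e^{t\lambda}$ one has $f'(\lambda_0)=t{\mskip 3mu}e^{t\lambda_0}$, hence

$$ J = \pmatrix{\lambda_0&1\cr 0&\lambda_0\cr}, \qquad e^{tJ} = \pmatrix{e^{t\lambda_0} & t{\mskip 3mu}e^{t\lambda_0}\cr 0 & e^{t\lambda_0}\cr}. $$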

3. These representations are justified by the following theorem, although for the case $\lambda^n$ the representation can also easily be proved directly using $J^n = (\lambda I + N)^n$, with a suitable nilpotent block $N$, and the binomial formula. One then does not need to go all the way through matrix functions. If one wants to make more use of the integral representation, one computes as follows. In general $f(A)=(1/2\pi i)\int_\Gamma f(z)(Iz-A)^{-1}dz$. Expanding the Cauchy kernel yields

$$ (Iz-A)^{-1} = {1\over z} \sum_{\nu=0}^\infty \left(A\over z\right)^\nu, \qquad \mathopen|z\mathclose| \gt \rho(A). $$

Then, interchanging integration and summation, one computes the residue as

$$ {1\over2\pi i} \int_\Gamma z^k (Iz-A)^{-1} dz = {1\over2\pi i} \int_\Gamma z^k {1\over z} \left(I+{A\over z}+\cdots+{A^k\over z^k}+\cdots\right) dz = A^k. $$

4. Theorem: We have

$$ {1\over2\pi i}\int_\Gamma (I\lambda-A)^{-1}d\lambda = I,\qquad {1\over2\pi i}\int_\Gamma \lambda(I\lambda-A)^{-1}d\lambda = A. $$

If $f$ and $g$ are holomorphic on (possibly different) neighborhoods of the spectrum of $A$, then

$$ (\alpha f+\beta g)(A)=\alpha f(A)+\beta g(A),\qquad (f\cdot g)(A)=f(A){\mskip 3mu}g(A). $$

Proof: As remarked above, it suffices to restrict to a single Jordan block $J$ of size $m\times m$. Let $\Gamma$ be a positively oriented circle around $\lambda_0$. We have

$$ \eqalign{ (I\lambda-J)^{-1} &= {I\over\lambda-\lambda_0} + {N\over(\lambda-\lambda_0)^2} + \cdots + {N^{m-1}\over(\lambda-\lambda_0)^m} \cr &= \pmatrix{ (\lambda-\lambda_0)^{-1} & (\lambda-\lambda_0)^{-2} & \ldots & (\lambda-\lambda_0)^{-m}\cr & \ddots & \ddots & \vdots\cr & & \ddots & (\lambda-\lambda_0)^{-2}\cr 0 & & & (\lambda-\lambda_0)^{-1}\cr }, \cr } $$

where $N = (\delta_{x+1,y})_{x,y}^m$, so $N^m=0\in\mathbb{C}^{m\times m}$. Because of $\int_\Gamma d\lambda/(\lambda-\lambda_0)=2\pi i$ and $\int_\Gamma (\lambda-\lambda_0)^k d\lambda=0$ for $k\in\mathbb{Z}\setminus\{-1\}$, clearly ${1\over2\pi i}\int_\Gamma (I\lambda-J)^{-1}d\lambda=I$ and

$$ {1\over2\pi i}\int_\Gamma \lambda{\mskip 3mu}(I\lambda-J)^{-1}d\lambda = {1\over2\pi i}\int_\Gamma \left((\lambda-\lambda_0)+\lambda_0\right)(I\lambda-J)^{-1}d\lambda = N + I\lambda_0 = J. $$

Additive linearity is clear. For the multiplicative statement one argues: If $f(\lambda)=\sum_{k=0}^\infty (\lambda-\lambda_0)^k f_k$ and $g(\lambda)=\sum_{k=0}^\infty (\lambda-\lambda_0)^k g_k$, then $f(\lambda)g(\lambda)=\sum_{k=0}^\infty (\lambda-\lambda_0)^k h_k$, with $h_k=\sum_{i=0}^k f_i g_{k-i}$. Consequently

$$ \eqalign{ f(J){\mskip 3mu}g(J) &= \pmatrix{ f_0 & f_1 & \ldots & f_{m-1}\cr & \ddots & & \vdots\cr 0 & & \ddots & f_1\cr & & & f_0\cr} \cdot \pmatrix{ g_0 & g_1 & \ldots & g_{m-1}\cr & \ddots & & \vdots\cr 0 & & \ddots & g_1\cr & & & g_0\cr} \cr &= \pmatrix{ h_0 & h_1 & \ldots & h_{m-1}\cr & \ddots & & \vdots\cr 0 & & \ddots & h_1\cr & & & h_0\cr} = (f\cdot g)(J). \cr } $$

    ☐

With the representation for $J^n$ the following facts follow easily.

5. Theorem: Let $J$ be an arbitrary Jordan matrix. Then:

(1) $J^n\to0$ if and only if $\left|\lambda\right| < 1$ for every eigenvalue $\lambda$.

(2) $\sup_{n=1}^\infty|J^n|\le\rm const$ if and only if $\left|\lambda\right| \le 1$ and only linear elementary divisors belong to eigenvalues of modulus 1, i.e., the Jordan blocks for eigenvalues of modulus 1 are always of size $(1\times 1)$.

Because of $A=XJY$, $Y=X^{-1}$, hence $A^n=XJ^nY$, and because of $\left|A^n\right|\le\left|X\right|\cdot\left|J^n\right|\cdot\left|Y\right|$, one therefore obtains the following theorem for an arbitrary square matrix $A$.

6. Theorem: Let $\lambda_i$, $i=1,\ldots,k$, be the eigenvalues of the matrix $A$. Then

(1) $\def\mapright#1{\mathop{\longrightarrow}\limits^{#1}}|A^n|\mapright{n\to\infty}0$ if and only if $|\lambda_i|<1$ for all $i=1,\ldots,k$, and

(2) $|A^n|$ is bounded for all $n\in\mathbb{N}$ if and only if $|\lambda_i|\le1$ and only $(1\times 1)$ Jordan blocks correspond to eigenvalues of modulus 1.

7. Remark: The following equivalences hold:

$$ \rho(A)\lt 1 \iff A^n\to0 \iff \sum_{n=0}^\infty A^n = (I-A)^{-1} \iff \left|\sum_{n=0}^\infty A^n\right|\lt \infty . $$

Proof: Regarding $\sum_{n=0}^\infty A^n=(I-A)^{-1}$ if $\rho(A)<1$: If $\lambda$ is an eigenvalue of $A$, then $(1-\lambda)$ is an eigenvalue of $(I-A)$. Because $|\lambda|<1$, $(I-A)$ is invertible. Further,

$$ \eqalign{ & I = (I-A)(I+A+\cdots+A^n)+A^{n+1}{\mskip 3mu} \cr \Rightarrow{\mskip 3mu} & (I-A)^{-1} = (I+A+\cdots+A^n)+(I-A)^{-1}A^{n+1}. \cr } $$

Thus for all $n\in\mathbb{N}$

$$ \bigl|(I-A)^{-1}-(I+A+\cdots+A^n)\bigr| \le \left|(I-A)^{-1}\right|\cdot\left|A^{n+1}\right| $$

and because $A^n\to0$ the claim follows. The converse direction, $\rho(A)<1$ if $\sum A^n = (I-A)^{-1}$, is clear from the necessary convergence condition for the series. The remaining equivalences follow, among other things, with the help of the preceding theorem and are obvious.     ☐

8. Another application of the representation of $J^n$ is the solution representation for homogeneous linear difference equations with constant coefficients. By passing from the companion matrix to the Jordan matrix one then recognizes the solution representation for the difference equation quite quickly. We have

$$ \begin{vmatrix} && \leftarrow\lambda & \leftarrow\lambda & \ldots & \leftarrow\lambda & \leftarrow\lambda\cr &I\lambda & -I & 0 & \ldots & 0 & 0\cr &0 & I\lambda & -I & \ldots & 0 & 0\cr &\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\cr & &&&& I\lambda & -I\cr &A_0 & A_1 & & \ldots & A_{\ell-1} & I\lambda+A_{\ell-1}\cr \end{vmatrix} = \left|\matrix{ 0 & -I & 0 & \ldots & 0\cr 0 & 0 & -I & \ldots & 0\cr \vdots & \vdots & \vdots & \ddots & \vdots\cr &&&& -I\cr L(\lambda) & * & \ldots & * & I\lambda+A_{\ell-1}\cr }\right| $$

hence

$$ \left|I\lambda-C_1\right| = \det L(\lambda). $$

9. Theorem: Assumption: Let $L(\lambda)=\lambda^\ell+a_{\ell-1}\lambda^{\ell-1}+\cdots+a_0$, with coefficients in $\mathbb{C}$, have the factorization

$$ L(\lambda) = (\lambda-\mu_1)^{\eta_1} (\lambda-\mu_2)^{\eta_2} \ldots (\lambda-\mu_k)^{\eta_k}. $$

Claim: The solution space of the homogeneous linear difference equation $x_{m+\ell}+a_{\ell-1}x_{m+\ell-1}+\cdots+a_0x_m=0$ has dimension $\ell$ and is spanned by

$$ x_m = \sum_{\nu=1}^k p_\nu(m) \mu_\nu^m, \qquad m=0,1,\ldots, $$

where $\mathop{\rm grad} p_\nu=\eta_\nu-1$, $\nu=1,\ldots,k$. The case $\mathop{\rm grad} p_\nu=0$ here means a constant.

Proof: Let $u_m:=(x_{m-1+\ell},\ldots,x_m)\in\mathbb{C}^\ell$. The solution of the difference equation $L(E)x_m=0$ is $u_m = C_1^m u_0 = X J^m Y u_0$, where $Y=X^{-1}$ is the matrix of left Jordan vectors and $X$ the matrix of right Jordan vectors. Multiplication from the left by $X$ and from the right by $Y$ mixes the individual Jordan blocks. After factoring out common factors, the powers $\mu_\nu^m$ are preceded by sums of binomial coefficients $m\choose\rho_\nu$, $0\le\rho_\nu<\eta_\nu$, $\nu=1,\ldots,k$, i.e., polynomials in $m$. Since $C_1$ is always non-derogatory -- consider the minor $(C_1)_{1,\ldots,n-1}^{2,\ldots,n}$ -- the degree of $p_\nu$ is exactly $\eta_\nu-1$, because $\mathop{\rm grad}{m\choose\eta_\nu-1}=\eta_\nu-1$. Because of $\sum\eta_\nu=\ell$ one has $\ell$ free parameters in total. It remains to show the linear independence of the given solutions.     ☐

10. Corollary: The sequences $(m^i{\mskip 3mu}\mu_\nu^m)$, $i=0,\ldots,\eta_\nu-1$, for $\nu=1,\ldots,k$, form a basis of the solution space of the difference equation.
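
For example, the difference equation $x_{m+2}-2x_{m+1}+x_m=0$ has $L(\lambda)=(\lambda-1)^2$, i.e., $\mu_1=1$ with $\eta_1=2$, and therefore the general solution

$$ x_m = c_0 + c_1 m, \qquad m=0,1,\ldots $$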

2. Homomorphism into upper triangular matrices

1. There is also another approach to holomorphic matrix functions, see the article by the two authors Yasuhiko Ikebe and Toshiyuki Inagaki, Ikebe/Inagaki (1986), "An Elementary Approach to the Functional Calculus for Matrices", The American Mathematical Monthly, Vol 93, No 3, May 1986, pp.390--392.

Let $f$ be sufficiently often differentiable in a neighborhood of $\{\lambda_1,\ldots,\lambda_r\}$. For a fixed $n\in\mathbb{N}$ one sets

$$ f^*(z) := \pmatrix{ f(z) & f'(z) & f''(z)/2! & \ldots & f^{(n-1)}(z)/(n-1)!\cr & f(z) & f'(z) & \ldots & \vdots\cr & & \ddots & \ddots & f''(z)/2!\cr 0 & & & \ddots & f'(z)\cr & & & & f(z)\cr } $$

For $f(z)=z$ one obtains

$$ f^*(z) = \pmatrix{ \lambda & 1 & \ldots & 0\cr & \ddots & \ddots & \vdots\cr & & \ddots & 1\cr & & & \lambda\cr } = J, $$

i.e., a simple Jordan block of size $n\times n$ for the eigenvalue $\lambda$. By $J$ we always mean such a Jordan block. If $f(z)\equiv c=\rm const$, then $f^*(z)=cI$.

The map $*\colon f\rightarrow f^*$ is a homomorphism from the algebra of functions analytic in a neighborhood of $\{\lambda_1,\ldots,\lambda_r\}$ into the commutative algebra of upper triangular matrices.

2. Theorem: (Homomorphism theorem) The following hold:

(1)     $(f+g)^* = f^* + g^*$,     additivity,

(2)     $(cf)^* = c{\mskip 3mu}f^*$, $c\in\mathbb{C}$ fixed,     homogeneity,

(3)     $(fg)^* = f^* {\mskip 3mu} g^* = g^* {\mskip 3mu} f^*$,     multiplication and commutativity,

(4)     $(f/g)^* = f^* {\mskip 3mu} (g^*)^{-1} = (g^*)^{-1}{\mskip 3mu} f^*$, if $g^*(z)\ne0$,     quotients and commutativity,

(5)     $(1/g)^* = (g^*)^{-1}$, if $g^*(z)\ne0$,     inversion.

Repeated application of (1), (2) and (4) immediately yields

3. Corollary: Let $f$ be a rational function without a pole at $\lambda$ and let $f=p/q$ be the fully reduced representation, i.e., with coprime polynomials $p$ and $q$. Then

$$ f^*(\lambda) = p(J){\mskip 3mu} \left[q(J)\right]^{-1} = \left[q(J)\right]^{-1} p(J). $$

But one also computes as expected for power series. This is shown by the following.

4. Consequence: Let $f(z)=a_0+a_1z+\cdots{\mskip 3mu}$ be a power series with radius of convergence strictly greater than $\left|\lambda\right|$. Then $ f^*(\lambda) = a_0I+a_1J+\cdots{\mskip 3mu}. $

For a given fixed square matrix $A$ consider the Jordan normal form $X^{-1}AX=\mathop{\rm diag}\left(J_1,\ldots,J_m\right)$ (unique up to permutation). Here $X$ (the matrix of right Jordan chains) is invertible. $J_i$ denotes a simple Jordan block for the eigenvalue $\mu_i$, $i=1,\ldots,m$. The $\mu_i$ need not be distinct. If $f$ is a function analytic in a neighborhood of $\{\mu_1,\ldots,\mu_m\}$, one defines $f(A)$ by

$$ X^{-1} f(A) X := \mathop{\rm diag}\left[f^*(\mu_1), \ldots, f^*(\mu_m)\right]. $$

The corollary and the consequence show that $f(A)$ agrees with what one commonly expects, at least for rational functions and for power series. Directly from the definition follows

5. Theorem: Identity theorem. Let $\lambda_1,\ldots,\lambda_r$ be the distinct eigenvalues of $A$. Let the functions $f$ and $g$ be analytic in a neighborhood of $\{\lambda_1,\ldots,\lambda_r\}$. Then equality $f(A)=g(A)$ holds if and only if the derivatives at the eigenvalues agree up to the corresponding order, i.e.,

$$ f^{(i)}(\lambda_k) = g^{(i)}(\lambda_k),\qquad i=0,\ldots,m_k-1,\quad k=1,\ldots,r, $$

where $m_k$ denotes the size of the largest Jordan block for the eigenvalue $\lambda_k$.

The integral formula used above as the definition of $f(A)$ can now, since functions of matrices have been defined differently, also be proved.

6. Theorem: Integral representation of $f(A)$. Let $\Gamma$ be a simple closed curve enclosing all eigenvalues of $A$ in its interior. Let $f$ be holomorphic on $\Gamma$ and in the interior of $\Gamma$. Then

$$ f(A) = {1\over2\pi i} \int_\Gamma f(\tau) (I\tau-A)^{-1} d\tau. $$

Proof: As usual the proof reduces to the consideration of a single Jordan block $J$ of size $n\times n$. One computes

$$ \def\fracstrut{} \eqalignno{ f(J) &= f^*(\lambda_k) \qquad\hbox{(by corollary and consequence)}\cr &= {1\over2\pi i} \int_\Gamma f(\tau) \pmatrix{ \displaystyle{1\over\tau-\lambda_k} & \displaystyle{1\over(\tau-\lambda_k)^2} & \ldots & \displaystyle{1\over(\tau-\lambda_k)^n}\fracstrut\cr & \ddots & \ddots & \vdots\fracstrut\cr 0 & & \ddots & \displaystyle{1\over(\tau-\lambda_k)^2}\fracstrut\cr & & & \displaystyle{1\over\tau-\lambda_k}\fracstrut\cr} d\tau \cr &= {1\over2\pi i} \int_\Gamma f(\tau) (I\tau-J)^{-1} d\tau.\cr } $$

In passing from the first line to the second line we used

$$ f^{(\nu)}(z) = {\nu!\over2\pi i} \int_\Gamma {f(\tau)\over(\tau-z)^{\nu+1}} d\tau, \qquad \nu=0,\ldots,k $$

and in passing from the $2^{\rm nd}$ to the $3^{\rm rd}$ line, that the inverse of $I\tau-J$ simply looks like that.     ☐

The penultimate theorem (identity theorem for matrix functions) shows that for a fixed matrix $A$ the matrix function can be represented as a matrix polynomial, since agreement is required only at finitely many derivatives. If the $m_k$ ($k=1,\ldots,r$) are known, then for a fixed matrix $A\in\mathbb{C}^{n\times n}$ an ansatz of the form $g(\lambda) = a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0$ can be made, and one obtains the $a_i$ as the solution of a Hermite interpolation problem. If all eigenvalues are distinct, i.e., $m_k=1$ ($k=1,\ldots,r$), an ordinary interpolation problem underlies it. The solution is carried out, for example, with Newton divided differences or the Lagrange formula, possibly also via Cramer's rule. One has to check whether $f(\lambda)$ is actually defined at the eigenvalues $\lambda_1,\ldots,\lambda_r$. Problems occur, e.g., for $f(\lambda)=\sqrt\lambda$, $f(\lambda)=\ln\lambda$, for $\lambda\notin\mathbb{R}^+$. For $A\in\mathbb{C}^{1\times1}$ the statement of the identity theorem degenerates into an empty statement, namely $f=g\iff f=g$.
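
A small illustration: for $A=\mathop{\rm diag}(1,2)$ all eigenvalues are simple, and the ordinary interpolation ansatz gives

$$ g(\lambda) = f(1) + \bigl(f(2)-f(1)\bigr)(\lambda-1), \qquad f(A) = g(A) = \pmatrix{f(1)&0\cr 0&f(2)\cr}. $$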

Simple consequences directly from the definition of matrix functions are now the following results.

7. Theorem: Cayley/Hamilton theorem, 1st version, Cayley, Arthur (1821--1895), Hamilton, William Rowan (1805--1865). The characteristic polynomial $\chi(z)=\det(Iz-A)$ of $A\in\mathbb{C}^{n\times n}$, regarded as a matrix polynomial, annihilates $A$, i.e., $\chi(A)=0\in\mathbb{C}^{n\times n}$.

Proof: After Charles A. McCarthy (1975): "The Cayley-Hamilton Theorem", The American Mathematical Monthly, April 1975, Vol 82, No 4, pp.390--391. The inverse of $Iz-A$ is $\left[\det(Iz-A)\right]^{-1} M_{\mu\nu}(z)$, where $\mathop{\rm grad} M_{\mu\nu}\le n-1$. The integral representation of $\chi(A)$ yields

$$ \left.\chi(A)\right|_{\mu\nu} = {1\over2\pi i} \int_\Gamma \chi(z) (Iz-A)^{-1}{\mskip 3mu}dz = {1\over2\pi i} \int_\Gamma \det(Iz-A) \left[\det(Iz-A)\right]^{-1} M_{\mu\nu}(z){\mskip 3mu}dz = 0, $$

by the Cauchy integral theorem ($\int_\Gamma f=0$ for $f$ holomorphic).     ☐

8. Definition: For a matrix $A\in\mathbb{C}^{n\times n}$ with eigenvalues $\lambda_1,\ldots,\lambda_r$ ($r\le n$) and Jordan normal form $A\sim\mathop{\rm diag}(J_1,\ldots,J_s)$ ($r\le s\le n$),

$$ \hat\chi(z) = (z-\lambda_1)^{m_1}\cdot\ldots\cdot(z-\lambda_r)^{m_r} $$

is called the minimal polynomial of $A$. Here $m_\nu$ is the order of the largest Jordan block for the eigenvalue $\lambda_\nu$. For $A={1{\mskip 3mu}0\choose0{\mskip 3mu}2}$ the minimal polynomial is $\hat\chi(z)=(z-1)(z-2)$, for $A=I$ it is $\hat\chi(z)=z-1$ independently of $n$, and for $A=\mathop{\rm diag}[{1{\mskip 3mu}1\choose0{\mskip 3mu}1},1,1,1]$ it is $\hat\chi(z)=(z-1)^2$.

9. Similar matrices have the same Jordan normal form up to renumbering of Jordan blocks, hence the same minimal polynomial and also the same characteristic polynomial. Obviously every factor $(J-\lambda_\nu I)^k$ ($\forall k\ge m_\nu$) vanishes for every Jordan block $J$ for the eigenvalue $\lambda_\nu$, so $\hat\chi(A)=0\in\mathbb{C}^{n\times n}$, but $(J-\lambda_\nu I)^k\ne0$ ($\forall k<m_\nu$). Thus $\hat\chi(z)$ is a polynomial of minimal degree that annihilates $A$. Because its leading coefficient equals 1, $\hat\chi(z)$ is even uniquely determined. Since $\hat\chi$ is always a divisor of $\chi(z)=\det(Iz-A)$, it follows

10. Theorem: Cayley/Hamilton theorem, 2nd version, Cayley, Arthur (1821--1895), Hamilton, William Rowan (1805--1865). $\chi(A)=0\in\mathbb{C}^{n\times n}$. In words: the matrix $A$ annihilates its own characteristic polynomial.

]]>
https://eklausmeier.goip.de/blog/2024/02-05-stetigkeit-der-eigenwerte-in-abhaengigkeit-der-matrixkomponenten https://eklausmeier.goip.de/blog/2024/02-05-stetigkeit-der-eigenwerte-in-abhaengigkeit-der-matrixkomponenten Stetigkeit der Eigenwerte in Abhängigkeit der Matrixkomponenten Mon, 05 Feb 2024 11:15:00 +0100 The eigenvalues of a matrix depend continuously on the entries of the matrix. This is proved here. One can prove even further dependence theorems, but the justifications then become longer, see the book by Gohberg/Lancaster/Rodman (1982); the authors are Gohberg, Izrael' TSudikovich, Lancaster, Peter, and Rodman, Leiba.

1. Theorem: Theorem of Rouché, Rouché, Eugène (1832--1910).

Assumption: Let $f$ and $g$ be meromorphic; let $Z_f,Z_g$ and $P_f,P_g$ be the number of zeros and poles, respectively, of $f,g$ inside $\Gamma$, counted according to their multiplicity.

Claim: If $\mathopen|f+g\mathclose|<\mathopen|f\mathclose|+\mathopen|g\mathclose|<\infty$ on $\Gamma$, then $Z_f-P_f=Z_g-P_g$ inside $\Gamma$.

Proof: After Conway, John B., Conway (1978), "Functions of One Complex Variable", Springer-Verlag, New York Heidelberg Berlin, Second Edition, 1978, xiii+317 pages, and Irving Leonard Glicksberg: "A Remark on Rouché's Theorem", The American Mathematical Monthly, March 1976, Vol 83, No 3, pp.186--187.

Because of the strict triangle inequality, $f$ and $g$ have no poles or zeros on $\Gamma$. Further,

$$ \left|{f(z)\over g(z)}+1\right| \lt \left|f(z)\over g(z)\right| + 1, \qquad\forall z\in\Gamma. $$

The meromorphic function $\lambda=f/g$ maps $\Gamma$ into $\Omega=\mathbb{C}\setminus\left[0,\infty\right[$, since otherwise for positive real $\lambda(z)$ one would have $\lambda(z)+1<\lambda(z)+1$. Let $\ell$ be a branch of the logarithm on $\Omega$. $\ell(f/g)$ is an antiderivative of $(f/g)^{-1}\cdot(f/g)'$. Thus

$$ 0 = {1\over2\pi i}\int_\Gamma (f/g)^{-1}\cdot(f/g)' = {1\over2\pi i}\int_\Gamma {f'\over f} - {g'\over g} = (Z_f-P_f) - (Z_g-P_g). $$

    ☐

If $\Gamma$ is traversed multiple times, the statement is to be modified accordingly. According to Glicksberg (1976) the result holds more generally in commutative, semisimple Banach algebras with identity element. Better known is the weaker statement: From $\mathopen|f+g\mathclose|<\mathopen|f\mathclose|<\infty$ on $\Gamma$ it follows that $Z_f=Z_g$ inside $\Gamma$.

2. Example: For $p(z)=z^n+a_1z^{n-1}+\cdots+a_n$ we have

$$ {p(z)\over z^n} = 1 + {a_1\over z} + \cdots + {a_n\over z^n} \longrightarrow 1 \quad(\mathopen|z\mathclose|\to\infty). $$

Hence

$$ \left|{p(z)\over z^n}-1\right| \lt 1, \qquad\hbox{oder}\qquad \left|p(z)-z^n\right| \lt \left|z^n\right|, $$

for $\mathopen|z\mathclose|\ge R$, $R$ suitably large. Rouché's theorem says that the polynomials $p(z)$ and $z^n$ have the same number of zeros inside the disc of radius $R$. This is the fundamental theorem of algebra.

The next theorem states: If the coefficients of two polynomials differ only slightly, then the roots also differ only slightly. Recall that an implication can be true if the premise is false.

3. Theorem: (Continuity of the roots of polynomials) Assumptions: Let $p(\lambda):=\lambda^n+a_{n-1}\lambda^{n-1}+\cdots+a_1\lambda+a_0$ and $q(\mu):=\mu^n+b_{n-1}\mu^{n-1}+\cdots+b_1\mu+b_0$ be two complex polynomials with roots $\lambda_1,\ldots,\lambda_n$ of $p$ and $\mu_1,\ldots,\mu_n$ of $q$. The coefficients $a_i$ and $b_i$ are arbitrary complex numbers.

Claim: $\forall\varepsilon>0: \exists\delta>0:\mskip 5mu$ $\left|a_i-b_i\right|<\delta{\mskip 3mu}\Longrightarrow{\mskip 3mu}\left|\lambda_i-\mu_i\right|< \varepsilon$, with suitable numbering of the roots $\lambda_i$ and $\mu_i$.

Proof: After Ortega, James McDonough, Ortega (1972): "Numerical Analysis---A Second Course", Academic Press, New York and London, 1972, xiii+201 pages.

Let $\gamma_1,\ldots,\gamma_k$ ($k\ge1$) be the distinct roots of $p$. Choose $\varepsilon$ smaller than half the smallest distance between all distinct roots, i.e.,

$$ 0\lt \varepsilon\lt {1\over2}\left|\gamma_i-\gamma_j\right|, \qquad \hbox{for}\quad i,j=1,\ldots,k, \quad i\ne j. $$

Around each $\gamma_i$ place a disc $D_i$ of radius $\varepsilon$, i.e.,

$$ D_i := \left\{z: \left|z-\gamma_i\right|\le\varepsilon\right\}, \qquad \hbox{for}\quad i=1,\ldots,k \quad (k\ge1) $$

$p$ vanishes on none of the $k$ disc boundaries, i.e., $p(z)\ne0$, $\forall z\in\partial D_i$, $\forall i=1,\ldots,k$. Because of the continuity of $p$ and the compactness of the boundaries, $p$ attains its minimum and maximum on each. So there are numbers $m_i$ [$i=1,\ldots,k$, namely the minima] such that

$$ \left|p(z)\right|\ge m_i, \qquad \forall z\in\partial D_i,{\mskip 3mu}\forall i=1,\ldots,k. $$

Further let

$$ M_i := \max_{z\in\partial D_i} \left\{\left|z^{n-1}\right|+\cdots+\left|z\right|+1\right\} $$

be the maximum of the polynomial “remainders” on the respective disc boundaries. Now choose $\delta$ so small that $\delta M_i \lt m_i$; for $\left|a_i-b_i\right|\lt\delta$ one then has

$$ \left|p(z) - q(z)\right| \le \delta M_i \lt m_i \le \left|p(z)\right|, \qquad\forall z\in\partial D_i, \quad i=1,\ldots,k. $$

The above theorem of Rouché is now applicable and says that $p$ and $q$ have the same number of zeros in the full discs. In other words, the roots have not “run away”, but have each only moved within the discs.     ☐

The theorem does not say that the roots remain real, if they were real, under variation of the coefficients. Such a statement does not hold as such. For that one would need stronger assumptions.

4. Corollary: The eigenvalues of a matrix depend continuously on all matrix entries.

Proof: The eigenvalues of the matrix are the roots of the characteristic polynomial. The coefficients of the characteristic polynomial, being a determinant function, depend continuously on the matrix entries. The composition of continuous functions is again continuous.     ☐

The above corollary does not necessarily hold for the eigenvectors.

5. Example: See Ortega (1972): The matrix after J.W. Givens

$$ A(\varepsilon) := \pmatrix{ 1+\varepsilon\cos{2\over\varepsilon} & -\varepsilon\sin{2\over\varepsilon}\cr -\varepsilon\sin{2\over\varepsilon} & 1-\varepsilon\cos{2\over\varepsilon}\cr }, \qquad\quad\varepsilon\ne0, $$

has the eigenvalues $1\pm\varepsilon$ and the two eigenvectors

$$ \left(\sin{1\over\varepsilon},{\mskip 3mu}\cos{1\over\varepsilon}\right)^\top,\qquad\qquad \left(\cos{1\over\varepsilon}, -\sin{1\over\varepsilon}\right)^\top, $$

which obviously tend to no limit whatsoever ($\varepsilon\to0$), although $A(\varepsilon)\to{1{\mskip 3mu}0\choose 0{\mskip 3mu}1}$, and this even though the eigenspaces are each one-dimensional and well separated.

6. Consequence: The order of a zero of a polynomial is locally constant.

As a partial result for eigenvectors one obtains

7. Theorem: Assumptions: Let $\lambda$ be a simple eigenvalue of $A\in\mathbb{C}^{n\times n}$ and $x\ne0$ the eigenvector belonging to $\lambda$. Further let $E_\nu\in\mathbb{C}^{n\times n}$ be arbitrary but such that $\lambda(E_\nu)\to\lambda$ if $E_\nu\to0$, where $\lambda(E_\nu)$ is an eigenvalue corresponding to $A+E_\nu$. Let the $\left|E_\nu\right|$ be so small that $\lambda(E_\nu)$ is likewise a simple eigenvalue and $A+E_\nu-\lambda(E_\nu)I$ has rank $(n-1)$, for all $\nu$.

Claim: $\def\mapright#1{\mathop{\longrightarrow}\limits^{#1}}\displaystyle\lambda(E_\nu)\mapright{\nu\to\infty}\lambda$ and $\displaystyle x(E_\nu)\mapright{\nu\to\infty}x$, if $\displaystyle E_\nu\to0$.

Proof: Because $\lambda$ is a simple eigenvalue, it follows by considering a Jordan normal form of $A$ that $A-\lambda I$ has rank $(n-1)$. Thus there are indices $i$ and $j$ such that

$$ \sum_{m\ne j} \left(a_{km} - \lambda\delta_{km}\right) x_m = -\left(a_{kj} - \lambda\delta_{kj}\right) x_j, \qquad k\ne i. $$

($\delta_{km}$ the Kronecker delta.) The coefficient matrix in front of the $x_m$ is invertible. Assume w.l.o.g. $x_j=1$,

$$ \begin{pmatrix} & & & j\downarrow & & & \cr & * & * & & & & \cr & * & * & & & & \cr k\rightarrow& & & \lambda & & & \cr & & & & * & * & *\cr & & & & * & * & *\cr & & & & * & * & *\cr \end{pmatrix} $$

Now let $\lambda(E_\nu)$ be the eigenvalue of $A+E_\nu$ such that $\lambda(E_\nu)\to\lambda$ for $E_\nu\to0$; note here the continuous dependence by the above theorem. By the consequence, the order of the zero is locally constant. Now the matrix $A+E_\nu-\lambda(E_\nu)I$, after deleting the $i$-th row and $j$-th column, is likewise an invertible $(n-1)\times(n-1)$ matrix. Thus the linear system of equations

$$ \sum_{m\ne j} \left(a_{km} + e_{km} - \lambda(E_\nu)\delta_{km}\right) x_m(E_\nu) = -\left(a_{kj} + e_{kj} -\lambda(E_\nu)\delta_{kj}\right), \qquad k\ne i $$

has exactly one solution $x_m(E_\nu)$ ($m\ne j$). This uniquely determined solution is a continuous function of $E_\nu$ (Cramer's rule).     ☐

So if the sequence of matrices $(E_\nu)$ is such that $A+E_\nu-\lambda_\nu I$ always has rank $(n-1)$, the continuous dependence of the eigenvalues on the matrix entries carries over to a continuous dependence of the eigenvectors on the matrix entries. If $(E_\nu)$ is not subject to the above rank restriction, the theorem provides no information.

]]>
https://eklausmeier.goip.de/blog/2024/02-04-die-spur-einer-matrix https://eklausmeier.goip.de/blog/2024/02-04-die-spur-einer-matrix Die Spur einer Matrix Sun, 04 Feb 2024 11:00:00 +0100 1. The trace (German: Spur) of a matrix $A\in\mathbb{C}^{n\times n}$ is defined as $\def\tr{\mathop{\rm tr}}\tr A=a_{11}+\cdots+a_{nn}$, i.e., the sum of the main diagonal elements. By elementary computation one shows $\tr AB=\tr BA$ for two arbitrary matrices $A\in\mathbb{C}^{n\times m}$, $B\in\mathbb{C}^{m\times n}$. $A$ and $B$ need not commute or be square. In particular $\def\adj#1{#1^*}\adj ab=\tr b\adj a$ for two arbitrary vectors $a,b\in\mathbb{C}^n$.
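
For instance, with $A={0{\mskip 3mu}1\choose0{\mskip 3mu}0}$ and $B={0{\mskip 3mu}0\choose1{\mskip 3mu}0}$ the products differ while the traces agree:

$$ AB = \pmatrix{1&0\cr 0&0\cr}, \qquad BA = \pmatrix{0&0\cr 0&1\cr}, \qquad \mathop{\rm tr} AB = \mathop{\rm tr} BA = 1. $$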

$\tr\adj AB$ is the scalar product for two square matrices $A,B\in\mathbb{C}^{n\times n}$. Therefore: $\forall B:\tr\adj AB=0$ $\Rightarrow$ $A=0$ (non-degeneracy of the scalar product/anisotropy). From the Riesz representation theorem, Riesz, Friedrich (1880--1956), the equivalence follows: $g$ is a linear form if and only if $\exists B:$ $g(A)=\tr BA$ for all $A$. Furthermore,

2. Theorem: The following two statements are equivalent:

(1) $g\colon\mathbb{C}^{n\times n}\to\mathbb{C}$ is a (complex) multiple of the trace function.

(2) $g\colon\mathbb{C}^{n\times n}\to\mathbb{C}$ is a linear form, i.e., $g(\lambda A+\mu B)=\lambda g(A)+\mu g(B)$, and $g(AB)=g(BA)$ holds, for all $\lambda,\mu\in\mathbb{C}$ and all $A,B\in\mathbb{C}^{n\times n}$.

Proof: “(1)$\Rightarrow$(2)”: These are simple computation rules for the trace function.

“(2)$\Rightarrow$(1)”: see Nicolas Bourbaki (1970): "Éléments de mathématique: Algèbre", Hermann, Paris, 1970, 167+210+258 pp. = 635 pp. For $n=1$ this is clear. For $n\ge2$ let $A=E_{ij}$ and $B=E_{jk}$ with $i\ne k$. Here $E_{\rho\tau}$ denotes the matrix having a 1 at position $(\rho,\tau)$ and zeros elsewhere. For such matrices one easily verifies $E_{ik} E_{j\ell} = 0$ if $k\ne j$, and $E_{ik} E_{k\ell} = E_{i\ell}$. Hence $g(E_{ik})=0$ $(i\ne k)$, and with $A=E_{ij}$ and $B=E_{ji}$ one obtains $g(E_{ii})=g(E_{jj})$. Since the $E_{\rho\tau}$ form a basis of $\mathbb{C}^{n\times n}$, it follows that $g(A)=\lambda\tr A$ $\forall A$, with a suitable fixed $\lambda$.     ☐

The theorem shows that there are not many linear forms on the algebra $\mathbb{C}^{n\times n}$ that are invariant under commutation. By normalization, say $g(E_{11})=1$ or $g(I)=n$, the trace function is uniquely determined.

3. Lemma: $\forall C,D\in\mathbb{C}^{n\times n}$: $\mathop{\rm Re}\nolimits \tr CD\le{1\over2}\left(\tr C\adj C+\tr D\adj D\right)$.

Proof: See Sha, Hu-yun (1986): "Estimation of the Eigenvalues of $AB$ for $A>0$, $B>0$", Linear Algebra and Its Applications, Vol 73, January 1986, pp.147--150. We have $\mathop{\rm Re}\nolimits \tr CD=\mathop{\rm Re}\nolimits \sum_{i,k}c_{ik}d_{ki}={1\over2}\sum_{i,k}\bigl( c_{ik}d_{ki}+\overline{c_{ik}d_{ki}}\bigr)$, and further ${1\over2}\bigl(\tr C\adj C+\tr D\adj D\bigr)={1\over2}\sum_{i,k}\bigl( c_{ik}\overline{c_{ik}}+d_{ik}\overline{d_{ik}}\bigr)= {1\over2}\sum_{i,k}\bigl(c_{ik}\overline{c_{ik}}+d_{ki}\overline{d_{ki}}\bigr)$. In abbreviated notation let $c_{ik}=e+fi$ and $d_{ki}=g+hi$. Then

$$ \eqalignno{ c_{ik}d_{ki}+\overline{c_{ik}d_{ki}} &= (e+fi)(g+hi)+(e-fi)(g-hi) = 2eg-2fh,\cr c_{ik}\overline{c_{ik}}+d_{ki}\overline{d_{ki}} &= (e+fi)(e-fi)+(g+hi)(g-hi) = e^2+f^2+g^2+h^2,\cr } $$

so $c_{ik}d_{ki}+\overline{c_{ik}d_{ki}} \le c_{ik}\overline{c_{ik}}+d_{ki}\overline{d_{ki}}$, since $2eg-2fh\le e^2+f^2+g^2+h^2$, hence ${1\over2}\sum_{i,k}\left(c_{ik}d_{ki}+\overline{c_{ik}d_{ki}}\right) \le {1\over2}\sum_{i,k}\left(c_{ik}\overline{c_{ik}}+d_{ki}\overline{d_{ki}}\right)$.     ☐

If a Hermitian matrix $A$ is invertible, then the inverse $A^{-1}$ is likewise Hermitian, since $AB=I=\adj B\adj A=\adj BA=A\adj B$, so $B=\adj B$, because an invertible matrix always commutes with its inverse. Likewise: the inverse of a normal matrix is normal. ($A=UD\adj U\Rightarrow A^{-1}=(UD\adj U)^{-1}=(\adj U)^{-1} D^{-1} U^{-1} =UD^{-1}\adj U$.) From this it follows immediately: the inverse of a positive definite matrix is again positive definite. Correspondingly, the inverse of a negative definite matrix is itself again negative definite. It now turns out that the product of two positive definite matrices at least has positive eigenvalues again.

4. Theorem: Assumptions: Let $A\succ0$, $B\succ0$ be two positive definite (Hermitian) matrices from $\mathbb{C}^{n\times n}$ with eigenvalues $0<\mu_1\le\cdots\le\mu_n$ and $0<\nu_1\le\cdots\le\nu_n$, respectively.

Claim: (1) $AB$ has only positive real eigenvalues $0<\lambda_1\le\cdots\le\lambda_n$.

(2)     $\displaystyle{{2\over\sum_i\mu_i^{-2}+\sum_i\nu_i^{-2}} \le \tr AB \le {1\over2}\left(\sum_i\mu_i^2+\sum_i\nu_i^2\right)}.$

Since all eigenvalues $\lambda_i$ of $AB$ are strictly positive, as a coarsening one has in particular

$$ {2\over n}{\mu_1^2 \nu_1^2 \over \mu_1^2 + \nu_1^2} \lt \lambda_i \lt {n\over2} \left(\mu_n^2 + \nu_n^2\right). $$

Proof: See Sha, Hu-yun (1986): For $A$ there exists $P$ with $A=P\adj P$. Because of $B\succ0$, also $P^{-1}B(\adj P)^{-1}\succ0$, hence there exists a unitary matrix $U$ such that

$$ P^{-1}B(\adj P)^{-1}=U\mathop{\rm diag}(x_1,\ldots,x_n)\adj U, $$

with corresponding eigenvalues $x_i>0$. Now

$$ \eqalign{ 0 \lt x_1+\cdots+x_n &= \tr P^{-1}B(\adj P)^{-1} \cr &=\tr(\adj P)^{-1}P^{-1}B \cr &= \tr AB\le{1\over2}\left(\tr A\adj A+\tr B\adj B\right) \cr & ={1\over2}\left( \sum_i\mu_i^2+\sum_i\nu_i^2\right). \cr } $$

The $x_i$ are the eigenvalues of $AB$, since

$$ \eqalign{ \left|\lambda I-AB\right| &= \left|A\right| \left|\lambda A^{-1}-B\right| \cr &= \left|A\right| \bigl|\lambda P\adj P-PU\mathop{\rm diag}(x_1,\ldots,x_n)\adj{(PU)}\bigr| \cr &=\left|A\right| \left|PU\right| \left|\mathop{\rm diag}(\lambda-x_1,\ldots,\lambda-x_n)\right| \bigl|(PU)^\top\bigr|. \cr } $$

Nach dem selben Muster setzt man $B=Q\adj Q$, $Q^{-1}A^{-1}(\adj Q)^{-1}= V\mathop{\rm diag}(y_1,\ldots,y_n)\adj V$. Also

$$ \eqalign{ 0\lt y_1+\cdots+y_n &= \tr Q^{-1}A^{-1}(\adj Q)^{-1} \cr &= \tr(\adj Q)^{-1}Q^{-1}A^{-1}=\tr B^{-1}A^{-1}\le {1\over2}\tr A^{-1}\adj{(A^{-1})}+\tr B^{-1}\adj{(B^{-1})} \cr &= {1\over2}\left(\sum_i \mu_i^{-2} + \sum_i \nu_i^{-2}\right). } $$

The $y_i$ are at the same time the eigenvalues of $(AB)^{-1}$, since

$$ \eqalign{ \left|\lambda I-AB\right| &= \left|A\right| {\mskip 3mu} \left|\lambda A^{-1}-B\right| \cr &= \left|A\right| {\mskip 3mu} \bigl|\lambda QV\mathop{\rm diag}(y_1,\ldots,y_n)\adj{(QV)} - Q\adj Q\bigr| \cr &= \left|A\right| {\mskip 3mu} \left|QV\right| {\mskip 3mu} \left|\mathop{\rm diag}(\lambda y_1-1,\ldots,\lambda y_n-1)\right| {\mskip 3mu} \bigl|\adj{(QV)}\bigr|. \cr } $$

    ☐

5. Example: For $A={1{\mskip 3mu}0\choose0{\mskip 3mu}3}$, $B={2,{\mskip 3mu}-1\choose-1,{\mskip 3mu}2}$, $AB={2,{\mskip 3mu}-1\choose-3,{\mskip 3mu}6}$ the eigenvalues are $1$ and $3$, again $1$ and $3$, and $4\pm\sqrt7$, respectively; in particular, $AB$ need not be Hermitian.
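As a check of claim (2) on this example: $\tr AB=2+6=8$, and indeed

$$ {2\over\sum_i\mu_i^{-2}+\sum_i\nu_i^{-2}} = {2\over 1+{1\over9}+1+{1\over9}} = {9\over10} \le 8 \le {1\over2}\left(1+9+1+9\right) = 10. $$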

]]>
https://eklausmeier.goip.de/blog/2024/02-03-hermitesche-unitaere-und-normale-matrizen https://eklausmeier.goip.de/blog/2024/02-03-hermitesche-unitaere-und-normale-matrizen Hermitesche, unitäre und normale Matrizen Sat, 03 Feb 2024 14:40:00 +0100 Hermitian matrices $(\def\adj#1{#1^*}\adj A=A)$, unitary matrices $(\adj A=A^{-1})$, and normal matrices $(\adj AA=A\adj A)$ can be diagonalized unitarily. This is the central result of this section.

While the Jordan normal form provides an almost-diagonal form for every complex matrix [more precisely, a $(0,1)$-band matrix form with the eigenvalues as diagonal elements], the following lemma of Schur yields a triangular form, but on a completely unitary basis. Just like the Jordan normal form, the Schur normal form does not hold for real matrices in real form if the characteristic polynomial does not split over $\mathbb{R}$; real $(2\times2)$ blocks then arise. But real matrices play no significant role here and in what follows.

1. Theorem: Theorem on a Schur normal form, Schur, Issai (10.01.1875--10.01.1941). $\forall A\in\mathbb{C}^{n\times n}$: $\exists U$ unitary:

$$ \adj UAU= \pmatrix{\lambda_1&*&\ldots&*\cr &\lambda_2&&*\cr &&\ddots&\vdots\cr 0&&&\lambda_n}, $$

where the $\lambda_i$ are the eigenvalues of $A$.

Proof: Let $\lambda_1$ be an eigenvalue of $A$ and $x_1$ a corresponding normalized eigenvector, $\adj{x_1}x_1 = 1$. There exist linearly independent, pairwise orthonormal $y_2,\ldots,y_n\in\mathbb{C}^n$ such that $X_1:=(x_1,y_2,\ldots,y_n)$ is unitary (basis extension theorem, Schmidt's orthogonalization procedure). Schmidt, Erhard (1876--1959). Thus $\adj{X_1}X_1 = I$, hence $\adj{x_1}y_i = 0$ $(i=2,\ldots,n)$, and therefore

$$ \adj{X_1} A X_1 = \pmatrix{\adj{x_1}\cr \adj{y_2}\cr \vdots\cr \adj{y_n}\cr} (Ax_1, Ay_2, \ldots, Ay_n) = \pmatrix{\lambda_1&*&\ldots&*\cr 0&&&\cr \vdots&&A_1&\cr 0&&&\cr}. $$

$A_1\in\mathbb{C}^{(n-1)\times(n-1)}$ contains, apart from $\lambda_1$, exactly the same eigenvalues as $A$, because of the similarity transformation. One now proceeds as above once more: to the eigenvalue $\lambda_2$ of $A_1$ (and also of $A$) there belongs a normalized eigenvector $x_2$, $A_1 x_2 = \lambda_2 x_2$, with $\adj{x_2}x_2 = 1$. One again extends to a pairwise orthonormal system of vectors $x_2,z_3,\ldots,z_n\in\mathbb{C}^{n-1}$, and accordingly

$$ X_2 := \pmatrix{1&0&0&\ldots&0\cr 0&x_2&z_3&\ldots&z_n\cr} \in \mathbb{C}^{n\times n} $$

and thus

$$ \adj{X_2}\adj{X_1}A X_1 X_2 = \pmatrix{ \lambda_1 & * & * & \ldots & *\cr 0 & \lambda_2 & * & \ldots & *\cr 0 & 0 &&&\cr \vdots & \vdots && A_2 &\cr 0 & 0 &&&\cr } . $$

Since the unitary matrices form a multiplicative, non-abelian group (indeed a compact group), and in particular are closed under multiplication, the claimed representation follows after $(n-2)$ further repetitions of this step.     ☐

Incidentally, Schur's lemma immediately yields the dimension theorem

$$ A:\mathbb{C}^m\to\mathbb{C}^n, \qquad m = \dim\ker A + \dim\mathop{\rm Im} A, $$

if, for non-square matrices, one pads $A$ with zeros to a square matrix in $\mathbb{C}^{(m\lor n)\times(m\lor n)}$.

2. $A$ is called normal if $\adj AA=A\adj A$, i.e., if $A$ and $\adj A$ commute. For example, Hermitian, skew-Hermitian, and (complex) multiples of unitary matrices are normal:

$$ \adj A=A^{-1}{\mskip 5mu}\Rightarrow{\mskip 5mu}\adj AA=I=A\adj A. $$

“Small” and special normal matrices are easily classified, as one verifies by elementary computation.

3. Lemma: (1) Normal $(2\times2)$ matrices are either Hermitian or complex multiples of unitary matrices.

(2) A triangular matrix is normal if and only if it is a diagonal matrix.

The kind of diagonalizability determines normality, Hermiticity, and unitarity uniquely.

4. Theorem: (1) $A$ normal $\iff$ $A$ unitarily diagonalizable.

(2) $A$ Hermitian $\iff$ $A$ unitarily diagonalizable with real diagonal.

(3) $A$ skew-Hermitian $\iff$ $A$ unitarily diagonalizable with purely imaginary diagonal.

(4) $A$ unitary $\iff$ $A$ unitarily diagonalizable with unimodular diagonal.

Proof: (1): “$\Rightarrow$”: Apply the preceding lemma to a Schur normal form of $A$.

“$\Leftarrow$”: With $A=UD\adj U$, diagonal matrix $D$ and unitary $U$ ($\adj UU=I$), one computes

$$ \eqalign{ \adj AA &= U\adj{(DU)}{\mskip 3mu}UD\adj U = U\overline DD\adj U,\cr A\adj A &= UD\adj U{\mskip 3mu}U\adj{(DU)} = UD\overline D\adj U.\cr } $$

(2): “$\Rightarrow$”: $A=\adj A$ $\Rightarrow$ $\adj xAx=\lambda\left<x,x\right> =\adj x\adj Ax=\adj{(Ax)}x=\overline\lambda\left<x,x\right>$, hence $\lambda=\overline\lambda$.

“$\Leftarrow$”: $Ax_i=\lambda_ix_i=\overline{\lambda_i}x_i=\adj Ax_i$ $\forall i$, so $A$ and $\adj A$ agree on an eigenbasis $x_1,\ldots,x_n$, hence $A=\adj A$ in every basis.

(3): “$\Rightarrow$”: $A=-\adj A$, so $A\adj A=-A^2=\adj AA$, hence $A$ is normal. $\adj xAx=\lambda \adj xx=-\adj x\adj Ax=-\overline\lambda \adj xx$, thus $\lambda=-\overline\lambda$, and consequently $\lambda\in i\mathbb{R}$.

“$\Leftarrow$”: With $A=UD\adj U$ and diagonal matrix $D=-\adj D$ one has $-\adj A=-U\overline D\adj U=UD\adj U=A$.

(4): “$\Rightarrow$”: Because of $\adj AA=I$, $A$ is invertible. For an eigenpair $(\lambda,x)$ of $A$, i.e., $Ax=\lambda x$, one obtains $\adj Ax=\overline\lambda x=A^{-1}x={1\over\lambda}x$, hence $\lambda\overline\lambda=\left|\lambda\right|^2=1$; for unitary matrices $A$ all eigenvalues are therefore unimodular.

“$\Leftarrow$”: A unimodular diagonal matrix is unitary, and the unitary matrices form a (non-abelian) group.     ☐

Because of $AX=XD$, $X$ is the matrix of right eigenvectors, and because of $X^{-1}A=DX^{-1}$, $X^{-1}$ is the matrix of left eigenvectors. Note that the minimal polynomial of a matrix has only simple roots precisely when the matrix is diagonalizable; part (1) of the theorem sharpens this for normal matrices, for which the diagonalizing similarity can be chosen unitary. Of course, matrices similar to a diagonal matrix need not be Hermitian, unitary, or normal, as $B={1{\mskip 3mu}2\choose0{\mskip 3mu}3}$ shows ($BB^\top\ne B^\top B$). If $A$ is Hermitian, then $\adj AA=A^2$ is positive semidefinite, and positive definite precisely when $A$ is invertible, since all eigenvalues of $A^2$ are nonnegative (respectively positive). The rank of a skew-symmetric matrix is always even; in particular, because of $\left|A\right|=(-1)^n\left|A\right|$, the determinant vanishes for odd $n$. This could also have been seen with the help of (3), since the determinant of a diagonal matrix is the product of its diagonal elements.

While the Schur normal form triangulates an arbitrary matrix unitarily, one can even “unitarily diagonalize” an arbitrary matrix $A$ if one no longer insists that the same unitary matrix $U$, respectively $\adj U$, appears on both sides of $A$.

5. Proposition: $\forall A\in\mathbb{C}^{n\times n}$ invertible: $\exists U,V$ unitary: $A=UDV$, with $D=\mathop{\rm diag}\sqrt{\lambda_i}$, where the $\lambda_i$ are the eigenvalues of $\adj AA$.

Proof: The matrix $\adj AA$ is Hermitian, so $\adj AA = W\hat D\adj W$, with unitary $W$ and real diagonal matrix $\hat D=\mathop{\rm diag}\lambda_i$. Using the invertibility of $A$ for the strict inequality, one has

$$ \lambda_i = \adj{e_i} \hat D e_i = \adj{e_i} \adj W \adj A AWe_i = \left\|AWe_i\right\|_2^2 \gt 0 . $$

Set $D=\mathop{\rm diag}\sqrt{\lambda_i}$. Then $D^{-1} \adj W \adj A AWD^{-1}=I$, so $U:=AWD^{-1}$ is unitary. $V:=W^{-1}$ is unitary as well, and $UDV=AWD^{-1}DW^{-1}=A$.     ☐

For the notation see Das äußere Produkt und Determinanten.

For positive definite (Hermitian) matrices one also sees at once the existence of arbitrary roots $\root r \of A$. In particular, for a real symmetric matrix $A$ with only non-negative eigenvalues ($\Longleftrightarrow$ positive semidefinite) one has: $\exists Q$: $QQ=A$. If $A$ is not square, square form can be reached by appending zero columns or zero rows, and one obtains

6. Theorem: Singular value decomposition. $\forall A\in\mathbb{C}^{m\times n}$: $\exists U\in\mathbb{C}^{m\times m}$ unitary, $V\in\mathbb{C}^{n\times n}$ unitary: $A=UDV$, with $D\in\mathbb{C}^{m\times n}$: $D=\mathop{\rm row}(\mathop{\rm diag}\sqrt{\lambda_i},0) \lor D=\mathop{\rm col}(\mathop{\rm diag}\sqrt{\lambda_i},0)$, where the $\lambda_i$ are the eigenvalues of $\adj AA$.

The square roots of the eigenvalues of $\adj AA$ are called singular values, and the decomposition $A=UDV$ (as above) a singular value decomposition. From it the pseudoinverse can be read off directly: $A^+=\adj VD^+\adj U$, where the $n\times m$ matrix $D^+$ arises from $D$ by transposing and inverting all nonzero entries, leaving the zeros in place.
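For instance, with $m=2$, $n=3$, and one zero singular value (numbers chosen purely for illustration):

$$ D = \pmatrix{2&0&0\cr 0&0&0\cr}, \qquad D^+ = \pmatrix{1/2&0\cr 0&0\cr 0&0\cr}, \qquad DD^+ = \pmatrix{1&0\cr 0&0\cr}, \qquad D^+D = \mathop{\rm diag}(1,0,0), $$

so $A^+$ is in general only a generalized inverse, not an inverse.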

7. Theorem: Hurwitz criterion, Adolf Hurwitz (1859--1919). Assumptions: Let $A$ be Hermitian, and let $A\succ0$, $A\succeq0$, $A\prec0$, $A\preceq0$ denote positive, positive semi-, negative, and negative semidefiniteness, respectively. Let $r$ always run from 1 to $n$, and let the multiindex $i=(i_1,\ldots,i_r)$ always be arranged in natural order, i.e., $i_1<\cdots<i_r$; likewise the multiindices $k$ and $\ell$.

Claim:

$$ \def\multisub#1#2{{\textstyle\mskip-3mu{\scriptstyle1\atop\scriptstyle#2_1}{\scriptstyle2\atop\scriptstyle#2_2}{\scriptstyle\ldots\atop\scriptstyle\ldots}{\scriptstyle#1\atop\scriptstyle#2_#1}}} \def\multisup#1#2{{\textstyle\mskip-3mu{\scriptstyle#2_1\atop\scriptstyle1}{\scriptstyle#2_2\atop\scriptstyle2}{\scriptstyle\ldots\atop\scriptstyle\ldots}{\scriptstyle#2_{#1}\atop\scriptstyle#1}}} \def\multisubsup#1#2#3{{\textstyle\mskip-3mu{\scriptstyle#3_1\atop\scriptstyle#2_1}{\scriptstyle#3_2\atop\scriptstyle#2_2}{\scriptstyle\ldots\atop\scriptstyle\ldots}{\scriptstyle#3_{#1}\atop\scriptstyle#2_{#1}}}} \displaylines{ A\succ0 \iff A_{1\ldots r}^{1\ldots r}\gt 0 \iff A_{r\ldots n}^{r\ldots n}\gt 0 \iff A\multisubsup rii\gt 0, \cr A\succeq0 \iff A\multisubsup rii\ge0, \cr A\prec0 \iff (-1)^r A_{1\ldots r}^{1\ldots r}\gt 0 \iff (-1)^{n-r} A_{r\ldots n}^{r\ldots n}\gt 0 \iff (-1)^r A\multisubsup rii\gt 0, \cr A\preceq0 \iff (-1)^r A\multisubsup rii\ge0. \cr } $$

Proof: First let $A$ be real symmetric, $A=XDX^{-1}$ with orthogonal $X$, i.e., $X^{-1}=X^\top$, and let $D$ be the diagonal matrix of eigenvalues. One has

$$ 1 = (XX^{-1})_i^i = \sum_\ell X_i^\ell (X^{-1})_\ell^i = \sum_\ell (X_i^\ell)^2, $$

so not all $X_i^\ell$ can vanish. From $A=XDX^{-1}=X(XD)^\top$ it follows that

$$ A_i^i = \sum_\ell X_i^\ell (XD)_i^\ell = \sum_{k,\ell} X_i^\ell X_i^k D_k^\ell = \sum_\ell (X_i^\ell)^2 D_\ell^\ell, $$

since $D_k^\ell=0$ for $k\ne\ell$. From this representation of $A_i^i$ as a sum of squares everything can now be read off. For a Hermitian matrix one argues in exactly the same way with unitary $X$ $(\adj X=X^{-1})$, observing $\overline{\det C}=\det\overline C$.     ☐

8. Remark: If a matrix cannot be diagonalized unitarily, or if non-linear elementary divisors occur, then the representation as a sum of squares (sum of absolute values) is no longer available, and one can no longer decide so easily whether all eigenvalues are positive or the like. For example, for the companion matrix of $(\lambda-1)(\lambda-2)(\lambda-3)= \lambda^3-6\lambda^2+11\lambda-6$ the first two leading principal minors vanish. If one is only interested in the sign behavior of a form $\adj xAx$, the Hurwitz criterion can be applied to the Hermitian matrix ${1\over2}(\adj A + A)$.
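Concretely, writing that companion matrix in the bottom-row convention (the original does not fix a convention),

$$ C = \pmatrix{0&1&0\cr 0&0&1\cr 6&-11&6\cr}, $$

the leading principal minors are $0$, $0$, and $\left|C\right|=6$, even though all eigenvalues $1,2,3$ are positive.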

It is true that $A\succeq0{\mskip 5mu}\Rightarrow{\mskip 5mu}A_{1\ldots r}^{1\ldots r}\ge0 \land a_{ii}\ge0$; the converse, however, fails, as one sees from the matrix

$$ A = \pmatrix{0&0&0&1\cr 0&0&0&0\cr 0&0&0&0\cr 1&0&0&0\cr}, $$

with eigenvalues 0 (twice), $(+1)$, and $(-1)$.

]]>
https://eklausmeier.goip.de/blog/2024/01-31-elementarsymmetrische-polynome https://eklausmeier.goip.de/blog/2024/01-31-elementarsymmetrische-polynome Elementarsymmetrische Polynome Wed, 31 Jan 2024 21:00:00 +0100 1. Definition: A polynomial $f(x_1,\ldots,x_n)$ in the indeterminates $x_1,\ldots,x_n$ is called symmetric if it remains invariant under every permutation of the indeterminates.

2. Example: $f(x_1,x_2)=x_1^2+x_2^2$ or $f(x_1,x_2)=x_1^3+x_2^3$ are symmetric polynomials, since interchanging the roles of $x_1$ and $x_2$ changes nothing.

3. Particularly important are the so-called elementary symmetric polynomials

$$ \def\tr{\mathop{\rm tr}} \eqalignno{ s_1 &= x_1 + x_2 + \cdots + x_n,\cr s_2 &= x_1x_2 + x_1x_3 + \cdots + x_1x_n + \cdots + x_{n-1}x_n,\cr s_3 &= x_1x_2x_3 + x_1x_2x_4 + \cdots + x_{n-2}x_{n-1}x_n,\cr \vdots\: & \qquad\vdots\qquad\vdots\cr s_n &= x_1\ldots x_n.\cr } $$

The polynomial $s_i$ is called the $i$-th elementary symmetric polynomial. The $s_i$ serve as a kind of basis for the space of symmetric polynomials.

4. Theorem: (Fundamental theorem on elementary symmetric polynomials) For every symmetric polynomial $f(x_1,\ldots,x_n)$ in $n$ indeterminates there is exactly one polynomial $F(x_1,\ldots,x_n)$ such that $f(x_1,\ldots,x_n)=F(s_1,\ldots,s_n)$, $\forall x_1,\ldots,x_n$.

Proof: See László Rédei (1967), Algebra I; László Rédei (15 November 1900 – 21 November 1980). Existence: Let $f(x_1,\ldots,x_n)$ be ordered lexicographically with respect to the powers, and let $q=ax_1^{k_1}\ldots x_n^{k_n}$ be the lexicographically last term, $k_1\ge\cdots\ge k_n$. Considering

$$ a s_1^{k_1-k_2} s_2^{k_2-k_3} \ldots{\mskip 3mu} s_{n-1}^{k_{n-1}-k_n} s_n^{k_n}, $$

one sees that this expression has as its leading term

$$ a x_1^{k_1-k_2} (x_1x_2)^{k_2-k_3} \ldots (x_1\ldots x_{n-1})^{k_{n-1}-k_n} (x_1\ldots x_n)^{k_n} \tag{*} $$

which obviously equals $q$. Hence

$$ f_1(x_1,\ldots,x_n) := f(x_1,\ldots,x_n) - a s_1^{k_1-k_2} \ldots s_n^{k_n} $$

contains only terms that come lexicographically before $q$; note $(*)$. $f_1(x_1,\ldots,x_n)$ is symmetric, and one repeats the procedure, which eventually terminates, since there are only finitely many terms of the form $b x_1^{\ell_1} \ldots x_n^{\ell_n}$ ($\ell_1\ge\cdots\ge\ell_n$).

Uniqueness: If $f=F_1=F_2$, then $F_1-F_2$ vanishes identically as a function of $s_1,\ldots,s_n$; since the elementary symmetric polynomials are algebraically independent, $F_1-F_2$ is the zero polynomial.     ☐

5. Remark: $f$ is symmetric, but $F$ is in general not symmetric, as $x_1^2+x_2^2=s_1^2-2s_2$ or $x_1^3+x_2^3=s_1^3-3s_1s_2$ show; here $s_1=x_1+x_2$, $s_2=x_1x_2$. The symmetry of $f$ is thus shifted into the symmetry of the polynomial basis.
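The second identity, for instance, is verified by expanding

$$ s_1^3 = x_1^3+3x_1^2x_2+3x_1x_2^2+x_2^3, \qquad 3s_1s_2 = 3x_1^2x_2+3x_1x_2^2, $$

and subtracting.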

6. Definition: Let $f(x)=a_0x^m+\cdots+a_m$ and $g(x)=b_0x^n+\cdots+b_n$ be two polynomials. Then the determinant

$$ \def\abc{\phantom{\matrix{\imath_1\cr \imath_1\cr \imath_1\cr}}} R = \left|\matrix{ a_0 & \ldots & a_m\cr & \ddots & \ddots & \ddots\cr && a_0 & \ldots & a_m\cr b_0 & \ldots & b_n\cr & \ddots & \ddots & \ddots\cr && b_0 & \ldots & b_n\cr }\right| \eqalign{ \left.\abc\right\} & \hbox{$n$ Zeilen}\cr \left.\abc\right\} & \hbox{$m$ Zeilen}\cr } $$

is called the resultant of $f(x)$ and $g(x)$, for $m,n\ge1$, $a_0\ne0$, $b_0\ne0$.

7. Let $u$ be a common zero, i.e., $f(u)=0$ and $g(u)=0$. Then

$$ \eqalign{ a_0u^{m+n-1} + \cdots + a_mu^{n-1}\qquad &= 0\cr \qquad\ddots\qquad\ddots\qquad\ddots\quad & \phantom{=0}\kern-1pt\vdots\cr \qquad\qquad a_0u^m + \cdots + a_m &= 0\cr b_0u^{n+m-1} + \cdots + b_nu^m\qquad &= 0\cr \qquad\ddots\qquad\ddots\qquad\ddots\quad & \phantom{=0}\kern-1pt\vdots\cr \qquad\qquad b_0u^n + \cdots + b_n &= 0\cr } $$

This homogeneous system of equations has the non-trivial solution vector

$$ \left(u^{n+m-1}, u^{n+m-2}, \ldots, u^2, u, 1\right)^\top \in \mathbb{C}^{n+m} $$

Therefore: if a common zero $u$ exists, then $R=0$. Indeed the converse also holds: if $R=0$, then a common zero exists.

8. Lemma: Let $d(x)=\gcd(f(x),g(x))$, where $\deg f(x)=m\ge1$, $\deg g(x)=n\ge1$. Then

$$ d(x)\ne\hbox{const}\iff f(x)g_1(x)+g(x)f_1(x)=0\quad\cases{ \deg f_1(x)\lt m,&$f_1(x)\ne0$,\cr \deg g_1(x)\lt n,&$g_1(x)\ne0$.\cr} $$

Proof: “$\Rightarrow$”: Obviously $f(x)=d(x)f_1(x)$ and $g(x)=-d(x)g_1(x)$ with two polynomials $f_1(x)$ and $g_1(x)$ having all the properties required above.

“$\Leftarrow$”: see Rédei (1967).     ☐

9. Theorem: For the resultant $R$ one has $f(x)F(x)+g(x)G(x)=R$, where $\deg F(x)<n$, $\deg G(x)<m$.

Proof: For $j=1,2,\ldots,m+n-1$ add the $j$-th column, multiplied by $x^{m+n-j}$, to the last ($(m+n)$-th) column of $R$, which thereby becomes

$$ \left(x^{n-1}f(x), \ldots, f(x), {\mskip 5mu} x^{m-1}g(x), \ldots, g(x)\right)^\top $$

Expanding along the last column and then factoring out $f(x)$ and $g(x)$ yields the stated representation.     ☐

With the help of the lemma it follows that $R$ vanishes precisely when $f(x)$ and $g(x)$ have a common factor.

10. Theorem: If $f(x)=a_0(x-y_1)\ldots(x-y_m)$ and $g(x)=b_0(x-z_1)\ldots(x-z_n)$, $m,n\ge1$, then the resultant has the three representations

$$ R = a_0^n b_0^m \prod_{\scriptstyle{1\le k\le m}\atop\scriptstyle{1\le\ell\le n}} (y_k-z_\ell) = a_0^n \prod_{1\le k\le m} g(y_k) = (-1)^{mn} b_0^m \prod_{1\le\ell\le n} f(z_\ell). $$

11. The Newton identities. Newton, Sir Isaac (1643--1727), Urbain Le Verrier (1811--1877). For the matrix $A\in\mathbb{C}^{n\times n}$ with eigenvalues $\lambda_i$, let $p_k=\sum\lambda_i^k=\tr A^k$, and let the characteristic polynomial be $f(x)=x^n+c_1x^{n-1}+\cdots+c_n=(x-\lambda_1)\ldots(x-\lambda_n)$. By the product rule,

$$ f'(x) = {f(x)\over x-\lambda_1} + \cdots + {f(x)\over x-\lambda_n}, $$

and by polynomial division one verifies

$$ f(x):(x-\lambda) = x^{n-1} + (\lambda+c_1)x^{n-2} + (\lambda^2+c_1\lambda+c_2)x^{n-3} + \cdots + (\lambda^{n-1}+c_1\lambda^{n-2}+\cdots+c_{n-1}). $$

Summation yields $f'(x)=nx^{n-1}+(p_1+nc_1)x^{n-2}+(p_2+c_1p_1+nc_2)x^{n-3} +\cdots+(p_{n-1}+c_1p_{n-2}+\cdots+nc_{n-1})$. Comparing coefficients with $f'(x)=nx^{n-1}+(n-1)c_1x^{n-2}+(n-2)c_2x^{n-3}+\cdots+2c_{n-2}x+c_{n-1}$ yields

$$ \eqalignno{ &p_1 + c_1 = 0\cr &p_2 + c_1p_1 + 2c_2 = 0\cr &\qquad\vdots\qquad\qquad\ddots\cr &p_{n-1} + c_1p_{n-2} + \cdots + c_{n-2}p_1 + (n-1)c_{n-1} = 0\cr } $$

and $\lambda_1^k f(\lambda_1) + \cdots + \lambda_n^k f(\lambda_n) = 0$ yields

$$ p_{n+k} + c_1p_{n-1+k} + \cdots + c_{n-1}p_{1+k} + nc_n = 0, \qquad k=0,1,2,\ldots $$
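For $n=2$, for example, these identities give $p_1=-c_1$ and $p_2=-c_1p_1-2c_2=c_1^2-2c_2$; with $c_1=-(\lambda_1+\lambda_2)$ and $c_2=\lambda_1\lambda_2$ this is just $\lambda_1^2+\lambda_2^2=(\lambda_1+\lambda_2)^2-2\lambda_1\lambda_2$, matching the remark $x_1^2+x_2^2=s_1^2-2s_2$ above.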
]]>
https://eklausmeier.goip.de/blog/2024/01-29-aeusseres-produkt-und-determinanten https://eklausmeier.goip.de/blog/2024/01-29-aeusseres-produkt-und-determinanten Das äußere Produkt und Determinanten Tue, 30 Jan 2024 14:20:00 +0100 1. The outer product

There is an abundance of ways to introduce determinants. One way is via the outer product. The following exposition closely follows the book Matrizenrechnung by Wolfgang Gröbner (1966).

Let $K$ be an arbitrary field. Every vector of an $n$-dimensional vector space over $K$ can be represented as a linear combination of the basis vectors (called units in what follows):

$$ \eqalign{ a &= a_1\varepsilon_1+a_2\varepsilon_2+\cdots+a_n\varepsilon_n,\cr b &= b_1\varepsilon_1+b_2\varepsilon_2+\cdots+b_n\varepsilon_n,\cr } \qquad a_i, b_i\in K. $$

The outer product (symbol $\land$) is first defined for the units:

$$ \varepsilon_i\land\varepsilon_k := \varepsilon_{ik} = -\varepsilon_{ki}, \qquad \varepsilon_{ii} = 0, $$
$$ a\land b = \sum a_ib_k(\varepsilon_i\land\varepsilon_k) = \sum a_ib_k\varepsilon_{ik} = \sum_{i\lt k} (a_ib_k - a_kb_i)\varepsilon_{ik} $$
$$ a\land b=-(b\land a) $$

in particular

$$ \displaylines{ a\land a=0, \qquad (\lambda a)\land b = a\land(\lambda b) = \lambda\cdot(a\land b),\cr a\land(b+c) = (a\land b)+(a\land c), \qquad (b+c)\land a = (b\land a)+(c\land a).\cr } $$

In $\mathbb{C}^3$ the outer product can be given an intuitive meaning. If one identifies

$$ \varepsilon_{12}=\varepsilon_3, \quad \varepsilon_{23}=\varepsilon_1, \quad \varepsilon_{31}=\varepsilon_2, $$

so that the units of higher level again lie in the original vector space, then in this case the outer product, which is also called the vector product (notation: $a\times b$), satisfies

$$ a\land b = a\times b = (a_2b_3-a_3b_2)\varepsilon_1 +(a_3b_1-a_1b_3)\varepsilon_2+(a_1b_2-a_2b_1)\varepsilon_3. $$
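For example, for $a=\varepsilon_1+2\varepsilon_2$ and $b=\varepsilon_2+\varepsilon_3$ one computes

$$ a\land b = (1\cdot1-2\cdot0)\varepsilon_{12} + (1\cdot1-0\cdot0)\varepsilon_{13} + (2\cdot1-0\cdot1)\varepsilon_{23} = 2\varepsilon_1 - \varepsilon_2 + \varepsilon_3, $$

using $\varepsilon_{12}=\varepsilon_3$, $\varepsilon_{13}=-\varepsilon_{31}=-\varepsilon_2$, $\varepsilon_{23}=\varepsilon_1$; this agrees with the usual cross product $(2,-1,1)^\top$.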

The generalization to the outer product of vectors of higher level proceeds by the rule

$$ \varepsilon_{i_1}\land\varepsilon_{i_2}\land\cdots\land\varepsilon_{i_k} := \varepsilon_{i_1i_2\ldots i_k}, $$

and correspondingly

$$ \varepsilon_{i_1i_2\ldots i_k}\land\varepsilon_{j_1j_2\ldots j_\ell} = \varepsilon_{i_1}\land\varepsilon_{i_2}\land\cdots\land\varepsilon_{i_k} \: \land\: \varepsilon_{j_1}\land\varepsilon_{j_2}\land\cdots\land\varepsilon_{j_\ell}. $$

By a vector of level $k$ one generally means a linear form in the $n\choose k$ units of level $k$. Sum, difference, and inner product of such vectors are defined by the usual rules of algebra. Vectors of the same level may thus be multiplied by scalars and added freely.

1. Theorem: If $a_1,a_2,\ldots,a_k$ are vectors of level 1, then the outer product does not change if one adds to one of these vectors, say $a_1$, a linear combination of the remaining vectors:

$$ a_1\land a_2\land\cdots\land a_k = (a_1+\lambda_2a_2+\cdots+\lambda_ka_k) \land a_2\land\cdots\land a_k,\qquad\forall\lambda_2,\ldots,\lambda_k\in K. $$

The proof follows by directly multiplying out the right-hand side. Apart from the first summand all further summands vanish, since in every product other than the first, two equal vectors are multiplied with each other in the outer sense.

$ \def\multisub#1#2{{\textstyle\mskip-3mu{\scriptstyle1\atop\scriptstyle#2_1}{\scriptstyle2\atop\scriptstyle#2_2}{\scriptstyle\ldots\atop\scriptstyle\ldots}{\scriptstyle#1\atop\scriptstyle#2_#1}}} \def\multisup#1#2{{\textstyle\mskip-3mu{\scriptstyle#2_1\atop\scriptstyle1}{\scriptstyle#2_2\atop\scriptstyle2}{\scriptstyle\ldots\atop\scriptstyle\ldots}{\scriptstyle#2_{#1}\atop\scriptstyle#1}}} \def\multisubsup#1#2#3{{\textstyle\mskip-3mu{\scriptstyle#3_1\atop\scriptstyle#2_1}{\scriptstyle#3_2\atop\scriptstyle#2_2}{\scriptstyle\ldots\atop\scriptstyle\ldots}{\scriptstyle#3_{#1}\atop\scriptstyle#2_{#1}}}} \def\diag{\mathop{\rm diag}} \def\tridiag{\mathop{\rm tridiag}} \def\col{\mathop{\rm col}} \def\row{\mathop{\rm row}} \def\dcol{\mathop{\rm col\vphantom {dg}}} \def\drow{\mathop{\rm row\vphantom {dg}}} \def\rank{\mathop{\rm rank}} \def\grad{\mathop{\rm grad}} \def\adj#1{#1^*} \def\iadj#1{#1^*} \def\tr{\mathop{\rm tr}} \def\mapright#1{\mathop{\longrightarrow}\limits^{#1}} \def\fracstrut{} $

2. Definition of a determinant

1. While the product of $k$ vectors of level 1 has $n\choose k$ components in total, the product of $n$ vectors has only a single component. This component is called a determinant.

$$ a_1\land a_2\land\cdots\land a_n = \sum a_{1i_1}a_{2i_2}\ldots a_{ni_n} \varepsilon_{i_1}\land\varepsilon_{i_2}\land\cdots\land\varepsilon_{i_n}. $$

All terms containing the product of two units with equal index vanish. For the determinant one writes

$$ \left|A\right|=\left|a_{ik}\right|=\sum\pm a_{1i_1}a_{2i_2}\cdots a_{ni_n}, $$

where the sign $\pm$ is the signature of the permutation $(i_1,\ldots,i_n)$.

2. Example: $n=2$: One has

$$ \left|\matrix{a_{11}&a_{12}\cr a_{21}&a_{22}\cr}\right| = a_{11}a_{22} - a_{21}a_{12}. $$

$n=3$: Here one computes $\left|A\right|$ as

$$ a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32} -a_{31}a_{22}a_{13}-a_{32}a_{23}a_{11}-a_{33}a_{21}a_{12}. $$

Because of the large number of summands, namely $n!$ (each summand an $n$-fold product), for the actual computation of determinants one usually employs determinant rules from $n\ge3$ on.

3. With the help of determinants the products of fewer than $n$ vectors can also be written out more precisely. The product of

$$ a = a_1\varepsilon_1+a_2\varepsilon_2+\cdots+a_n\varepsilon_n,\qquad b = b_1\varepsilon_1+b_2\varepsilon_2+\cdots+b_n\varepsilon_n, $$

is

$$ a\land b = \sum_{i\lt k} \left|a_i,b_k\right|\varepsilon_{ik}, $$

where $\left|a_i,b_k\right|$ stands for $\left|{a_i,a_k\atop b_i,b_k}\right|$. For a further, third vector

$$ c=c_1\varepsilon_1+c_2\varepsilon_2+\cdots+c_n\varepsilon_n, $$

one has

$$ a\land b\land c=\sum_{i\lt j\lt k}\left|a_i,b_j,c_k\right|\varepsilon_{ijk}, $$

with

$$ \left|a_i,b_j,c_k\right|=\left|\matrix{ a_i & a_j & a_k\cr b_i & b_j & b_k\cr c_i & c_j & c_k\cr }\right| $$

3. Properties of a determinant

1. Remark: The following hold:

(1) The determinant of a square matrix $A=(a_{ik})$,

$$ \left|A\right|=\left|a_{ik}\right|=\sum\pm a_{1i_1}a_{2i_2}\cdots a_{ni_n}, $$

is a homogeneous, linear function of the elements of each row and of each column.

(2) A determinant changes its sign if two rows or two columns are interchanged.

(3) Properties (1) and (2) are characteristic of a determinant: up to normalization by a scalar there are no further multilinear, alternating forms of this kind.

2. Theorem: If $\Phi\colon \mathop{\rm GL}(n,K)\rightarrow K^\times$ is a map with $\Phi(AB)=\Phi(A)\Phi(B)$ for all $A,B\in \mathop{\rm GL}(n,K)$, then there is a $\varphi\colon K^\times\rightarrow K^\times$ with $\varphi(\alpha\beta)=\varphi(\alpha)\varphi(\beta)$ for all $\alpha,\beta\in K^\times$, and $\Phi(A)=\varphi(\det A)$ for all $A\in \mathop{\rm GL}(n,K)$.

Proof: see Max Koecher (1985), p. 119.     ☐

3. See Wolfgang Gröbner (1966). Let $A=(a_{ik})$ and $B=(b_{ik})$ be two square matrices with $n$ rows, and let $C=(c_{ik})=AB$ be the product matrix, $c_{ik}=\sum_j a_{ij}b_{jk}$. The row vectors of $C$ are

$$ c_i = \sum_k c_{ik}\varepsilon_k = \sum_{j,k} a_{ij}b_{jk}\varepsilon_k = \sum_j a_{ij}b_j, $$

where $b_j=\sum_k b_{jk}\varepsilon_k$ denote the row vectors of $B$. Now

$$ \eqalign{ c_1\land c_2\land\cdots\land c_n &= \left|C\right|\varepsilon_{12\ldots n}\cr &= (a_{11}b_1+a_{12}b_2+\cdots+a_{1n}b_n)\land\cdots\land (a_{n1}b_1+a_{n2}b_2+\cdots+a_{nn}b_n)\cr &= \left|A\right| b_1\land b_2\land\cdots\land b_n = \left|A\right| \left|B\right| \varepsilon_{12\ldots n}. } $$

Comparing the first and last lines shows $\left|C\right| = \left|A\right| \left|B\right|$, i.e., $\left|AB\right| = \left|A\right| \left|B\right|$.

4. The determinant product theorem derived above, like ultimately the canonical scalar product, is a special case of the formula of Cauchy/Binet, also called the determinant product theorem for rectangular matrices. Cauchy, Augustin Louis (1789--1857), Binet, Jacques Philippe Marie (1786--1856).

Let $A=(a_{ik})$ be an $m\times n$ matrix and $B=(b_{k\ell})$ an $n\times s$ matrix. Their product $AB=C=(c_{i\ell})$ is an $m\times s$ matrix with the elements

$$ c_{i\ell} = \sum_k a_{ik}b_{k\ell},\qquad i=1,\ldots,m,\quad \ell=1,\ldots,s. $$

Each row vector $c_i$ of $C$ is

$$ c_i = \sum_\ell c_{i\ell}\varepsilon_\ell = \sum_{k,\ell} a_{ik}b_{k\ell}\varepsilon_\ell = \sum_k a_{ik}b_k, $$

with the row vectors $b_k = \sum_\ell b_{k\ell}\varepsilon_\ell$ of the matrix $B$. Now

$$ \eqalign{ c_1\land c_2\land\cdots\land c_m &= \sum_\ell C\multisup m\ell \varepsilon_{\ell_1\ell_2\ldots\ell_m}\cr &= \sum_k A\multisup mk (b_{k_1}\land b_{k_2}\land\cdots\land b_{k_m})\cr &= \sum_{k,\ell} A\multisup mk B\multisubsup mk\ell \varepsilon_{\ell_1\ell_2\ldots\ell_m},\cr } $$

Comparing the coefficients of $\varepsilon_{\ell_1\ell_2\ldots\ell_m}$, one finds

$$ \sum_{k} A\multisup mk B\multisubsup mk\ell = C\multisup m\ell . $$

5. This formula can be generalized somewhat further by evaluating, instead of $c_1\land c_2\land\cdots\land c_m$, the outer product of any $r$ row vectors $c_{i_1}\land c_{i_2}\land\cdots\land c_{i_r}$ in exactly the same way:

$$ \sum_{k} A\multisubsup rik B\multisubsup rk\ell = C\multisubsup ri\ell $$

In words: every $r$-rowed subdeterminant of the product matrix can be represented as a sum of products of $r$-rowed subdeterminants of $A$ and $B$, combined in such a way that the column indices of the first agree with the row indices of the second, while the row indices of the first and the column indices of the second agree with the corresponding indices in the product matrix.

6. One now examines special cases of the above formula. If $r=m=s$ (i.e., $C$ is square), one has

$$ \left|C\right| = \sum_k A\multisup mk B\multisub mk $$

If $n<m$, then $\left|C\right|=0$. Setting $B=A^\top$, one has on the one hand

$$ c_{i\ell} = \sum_k a_{ik}a_{\ell k} = a_i\cdot a_\ell, $$

with the row vectors $a_i$ of the matrix $A$. On the other hand $B\multisub mk = A\multisup mk$, and together with

$$ a_{i_1}\land a_{i_2}\land\cdots\land a_{i_r} = \sum_k A\multisubsup rik \varepsilon_{k_1k_2\ldots k_r}, $$

one obtains

$$ \left|AA^\top\right| = \left|a_i\cdot a_k\right| = \sum \left(A\multisup mk\right)^2 = \left|a_1\land a_2\land\cdots\land a_m\right|^2. \tag{*} $$

An application of this formula with $m=2$ and $A={a_1\,a_2\,\ldots\,a_n\choose b_1\,b_2\,\ldots\,b_n}$ yields the formula of Lagrange, Joseph Louis (1736--1813):

$$ \left|a\times b\right|^2 = \sum_{i\lt k} \left(a_ib_k-a_kb_i\right)^2 = \left(\sum a_i^2\right) \left(\sum b_k^2\right) - \left(\sum a_ib_i\right)^2 = \left|a\right|^2 \left|b\right|^2 - (ab)^2. $$
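A quick numerical check with $a=(1,2,3)$ and $b=(4,5,6)$: $a\times b=(-3,6,-3)$, so the left-hand side is $9+36+9=54$, and indeed $14\cdot77-32^2=1078-1024=54$.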

If the $a_1,a_2,\ldots,a_m$ are pairwise orthogonal, i.e.,

$$ a_i\cdot a_k = \cases{0,& for $i\ne k$,\cr \left|a_i\right|^2,& for $i=k$,} $$

then it follows immediately from $(*)$ that

$$ \left|a_1\land a_2\land\cdots\land a_m\right| = \left|a_1\right|\cdot\left|a_2\right|\ldots\left|a_m\right|. $$

In words: the absolute value of the outer product of pairwise orthogonal vectors equals the product of their absolute values. This is the intuitive meaning of the box product: the volume spanned by pairwise orthogonal vectors equals the product of the side lengths.

4. The Laplace expansion theorem

1. See Wolfgang Gröbner (1966). Let $(i_1,\ldots,i_r)$ and $(i'_1,\ldots,i'_s)$ be mutually complementary arrangements, i.e., $r+s=n$,

$$ i_1 \lt i_2 \lt \cdots \lt i_r, \qquad i'_1 \lt i'_2 \lt \cdots \lt i'_s, $$

and $(i_1,\ldots,i_r, i'_1,\ldots,i'_s)$ is a permutation of $(1,2,\ldots,n)$, so $s=n-r$. Complementarily ordered arrangements $(i_1,\ldots,i_r, i'_1,\ldots,i'_s)$ require

$$ (i_1-1)+(i_2-2)+\cdots+(i_r-r) = i_1+i_2+\cdots+i_r - {r\over2}(r+1) $$

transpositions to reach the natural order $(1,\ldots,n)$.

By grouping row vectors of $A$ one computes

$$ \eqalignno{ \left|A\right| \varepsilon_{1\ldots n} &= a_1\land\cdots\land a_n \cr &= (-1)^p \left(a_{i_1}\land\cdots\land a_{i_r}\right) \land \left(a_{i'_1}\land\cdots\land a_{i'_{n-r}}\right) \cr &= (-1)^p \left(\sum_k A\multisubsup rik \varepsilon_{k_1\ldots k_r}\right) \land \left(\sum_k A\multisubsup {n-r}{i'}{k'} \varepsilon_{k'_1\ldots k'_{n-r}}\right) \cr &= \sum_k (-1)^{m+p} A\multisubsup rik A\multisubsup {n-r}{i'}{k'} \varepsilon_{1\ldots n}. \cr } $$

with

$$ p = i_1+\cdots+i_r - {r\over2}(r+1),\qquad m = k_1+\cdots+k_r - {r\over2}(r+1), $$

and where we used

$$ \varepsilon_{k_1\ldots k_r}\land\varepsilon_{k'_1\ldots k'_{n-r}} = \varepsilon_{k_1\ldots k_r k'_1\ldots k'_{n-r}} = (-1)^m \varepsilon_{1\ldots n}, $$

or, more generally,

$$ \varepsilon_{k_1\ldots k_r}\land\varepsilon_{\nu_1\ldots\nu_{n-r}} = \varepsilon_{k_1\ldots k_r\nu_1\ldots\nu_{n-r}} = \cases{ (-1)^m \varepsilon_{1\ldots n}, & falls $\nu_1=k'_1,\ldots,\nu_{n-r}=k'_{n-r}$,\cr 0, & sonst.\cr } $$

To simplify notation one defines the algebraic complement $\alpha\multisubsup rik$ as

$$ \alpha\multisubsup rik := (-1)^{i_1+\cdots+i_r+k_1+\cdots+k_r} A\multisubsup r{i'}{k'}. $$

Instead of algebraic complement one also says adjunct of the subdeterminant $A\multisubsup rik$. With this notation one obtains the

2. Theorem: General Laplace expansion theorem. Laplace, Pierre Simon (1749--1827). One obtains the value of the $n$-rowed determinant $\left|A\right|$, expanded along the rows $i_1,i_2,\ldots,i_r$ ($1\le i_1<i_2<\ldots<i_r\le n$), by forming all $r$-rowed subdeterminants of these $r$ rows, multiplying them by their algebraic complements, and adding:

$$ \eqalignno{ \left|A\right| &= \sum_k A\multisubsup rik \alpha\multisubsup rik,\cr \left|A\right| &= \sum_i A\multisubsup rik \alpha\multisubsup rik.\cr } $$

The sums extend over all $n\choose r$ combinations $(k_1,\ldots,k_r)$, respectively $(i_1,\ldots,i_r)$.

3. By what was derived above, slightly more generally one obviously has

$$ \sum_k A\multisubsup rik \alpha\multisubsup r\ell k = \cases{ \left|A\right|, & falls $i_\nu=\ell_\nu$,\cr 0, & sonst.\cr } $$

For $r=1$ one obtains the usual expansion along a row or column, in particular

$$ \pmatrix{a_{11} & \ldots & a_{1n}\cr \vdots & \ddots & \vdots\cr a_{n1} & \ldots & a_{nn}\cr} \pmatrix{\alpha_1^1 & \ldots & \alpha_n^1\cr \vdots & \ddots & \vdots\cr \alpha_1^n & \ldots & \alpha_n^n\cr} = \pmatrix{\left|A\right| && 0\cr &\ddots&\cr 0&&\left|A\right|\cr}. $$

As usual for $\xi_i^j$: $i$ is the row index, $j$ the column index; the matrix $(\alpha)$ above thus appears transposed. This provides an explicit description of the inverse matrix: $\alpha_i^j / \left|A\right|$ is the $(j,i)$ entry of the inverse.

4. Theorem: (Minors of the inverse) Let $B=A^{-1}$, where $A$ is invertible. Every minor of the inverse can be expressed through adjuncts of the original matrix:

$$ B\multisubsup rik = {\alpha\multisubsup rki\over\left|A\right|} = {(-1)^m\over\left|A\right|} A\multisubsup {n-r}{k'}{i'}, \qquad m = i_1+\cdots+i_r + k_1+\cdots+k_r. $$

Proof: By Cauchy/Binet,

$$ \sum_k A\multisubsup rik B\multisubsup rk\ell = \cases{ 1, & falls $i_\nu=\ell_\nu$ $\forall\nu$,\cr 0, & sonst.\cr} \tag{*} $$

By the Laplace expansion theorem,

$$ \sum_k A\multisubsup rik \alpha\multisubsup r\ell k = \cases{ \left|A\right|, & falls $i_\nu=\ell_\nu$ $\forall\nu$,\cr 0, & sonst.\cr} $$

Both $(A\multisubsup rik)_{i,k}$ and $(\alpha\multisubsup r\ell k)_{k,\ell}$ are matrices with ${n\choose r}={n\choose n-r}$ rows and columns. By $(*)$, $(B\multisubsup rk\ell)_{k,\ell}$ is an inverse of $(A\multisubsup rik)_{i,k}$; by the Laplace expansion, so is the matrix with entry $\alpha\multisubsup r\ell k/\left|A\right|$ in position $(k,\ell)$. Since inverses are uniquely determined, equality follows.     ☐

5. Example: for Cauchy/Binet, the Laplace expansion theorem, and minors of the inverse alike. Let

$$ A = \pmatrix{ 13 & 14 & 6 & 4\cr 8 & -1 & 13 & 9\cr 6 & 7 & 3 & 2\cr 9 & 5 & 16 & 11\cr }, \qquad A^{-1} = \pmatrix{ 1 & 0 & -2 & 0\cr -5 & 1 & 11 & -1\cr 287 & -67 & -630 & 65\cr -416 & 97 & 913 & -94\cr }. $$

(1) The determinant of $A$ can be computed, e.g., as follows:

$$ \left|A\right| = A_{12}^{12} A_{34}^{34} - A_{12}^{13} A_{34}^{24} + A_{12}^{14} A_{34}^{23} + A_{12}^{23} A_{34}^{14} - A_{12}^{24} A_{34}^{13} + A_{12}^{34} A_{34}^{12} = 1. $$

Here, unlike in the Laplace expansion along a single row (or column), the sign need not alternate from one term to the next.

(2) It is $AB=:C=I$. Hence, by Cauchy/Binet as above, with $4\choose2$ summands,

$$ C_{23}^{34} = \left|\matrix{0&0\cr 1&0\cr}\right| = A_{23}^{12} B_{12}^{34} + A_{23}^{13} B_{13}^{34} + A_{23}^{14} B_{14}^{34} + A_{23}^{23} B_{23}^{34} + A_{23}^{24} B_{24}^{34} + A_{23}^{34} B_{34}^{34} = 0. $$

(3) For the minor $B_{12}^{34}$ of the inverse $B$ one computes

$$ B_{12}^{34} = \left|\matrix{-2&0\cr 11&-1\cr}\right| = {(-1)^{10}\over1} A_{12}^{34} = \left|\matrix{6&4\cr 13&9\cr}\right| = 2, $$

and likewise

$$ B_{23}^{24} = \left|\matrix{1&-1\cr -67&65\cr}\right| = (-1)^{11} A_{13}^{14} = -\left|\matrix{13&4\cr 6&2\cr}\right| = -2. $$

5. Further consequences of the theorem of Cauchy/Binet

Owing to its great importance, we give a second proof of the determinant multiplication theorem of Cauchy/Binet, one that does not rely on the outer product.

1. Theorem: (Theorem of Cauchy/Binet) Let $C=AB$. Then $C_{1\ldots r}^{1\ldots r} = \sum_i A\multisup ri B\multisub ri$.

Proof: (of Cauchy/Binet) see Gantmacher, Felix R. (1908--1964), Gantmacher (1986). One computes

$$ \eqalignno{ \left|\matrix{ c_{11} & \ldots & c_{1r}\cr \vdots & \ddots & \vdots\cr c_{r1} & \ldots & c_{rr}\cr }\right| &= \left|\matrix{ \sum_{i_1=1}^n a_{1i_1}b_{i_11} & \ldots & \sum_{i_r=1}^n a_{1i_r}b_{i_rr}\cr \vdots & \ddots & \vdots\cr \sum_{i_1=1}^n a_{ri_1}b_{i_11} & \ldots & \sum_{i_r=1}^n a_{ri_r}b_{i_rr}\cr }\right| &\cr &= \sum_{i_1,\ldots,i_r=1}^n \left|\matrix{ a_{1i_1}b_{i_11} & \ldots & a_{1i_r}b_{i_rr}\cr \vdots & \ddots & \vdots\cr a_{ri_1}b_{i_11} & \ldots & a_{ri_r}b_{i_rr}\cr }\right| &\cr &= \sum_{i_1,\ldots,i_r=1}^n A\multisup ri b_{i_11}\ldots b_{i_rr}. &\cr } $$

Among all $n^r$ summands only $n(n-1)\ldots(n-r+1)={n\choose r}r!$ are of interest, namely those in which the minors $A\multisup ri$ do not contain two, three, $\ldots$, $r$ equal columns. Of these ${n\choose r}r!$, in turn, only $n\choose r$ are genuinely distinct; the remaining ones are nothing but interchanges of two columns. So one continues

$$ \eqalignno{ &\phantom{{}={}} \sum_{1\le i_1\lt \cdots\lt i_r\le n} \: \sum_{(\nu_1,\ldots,\nu_r)\in{\rm Perm}(i_1,\ldots,i_r)} \sigma(\nu_1,\ldots,\nu_r) A\multisup ri b_{\nu_11}\ldots b_{\nu_rr} \cr &= \sum_{1\le i_1\lt \cdots\lt i_r\le n} A\multisup ri \sum \sigma(\nu_1,\ldots,\nu_r) b_{\nu_11}\ldots b_{\nu_rr} \cr &= \sum_{1\le i_1\lt \cdots\lt i_r\le n} A\multisup ri B\multisub ri . \cr } $$

    ☐

2. For more than two matrices the theorem of Cauchy/Binet reads as follows:

$$ \eqalignno{ (AB)_i^j &= \sum_k A_i^k B_k^j, \cr (ABC)_i^j &= \sum_{k,\ell} A_i^k B_k^\ell C_\ell^j, \cr (ABCD)_i^j &= \sum_{k,\ell,m} A_i^k B_k^\ell C_\ell^m D_m^j, \cr (ABCDE)_i^j &= \sum_{k,\ell,m,p} A_i^k B_k^\ell C_\ell^m D_m^p E_p^j. \cr } $$

3. Let

$$ {\cal A}_p := (A\multisubsup pik)_{i_1\lt \cdots\lt i_p,{\mskip 3mu}k_1\lt \cdots\lt k_p} \in \mathbb{C}^{{n\choose p}\times{n\choose p}} $$

be the $p$-th associated matrix of $A$.

The arrangements are traversed in lexicographic order. For example, for a $4\times4$ matrix $A$ one obtains the $6\times6$ matrix

$$ {\cal A}_2 = \pmatrix{ A_{12}^{12} & A_{12}^{13} & \ldots & A_{12}^{34}\cr \vdots & \vdots & \ddots & \vdots\cr A_{34}^{12} & A_{34}^{13} & \ldots & A_{34}^{34}\cr } $$

A reformulation of the theorem of Cauchy/Binet is: from $C=AB$ it follows that ${\cal C}_p = {\cal A}_p {\cal B}_p$, $p=1,2,\ldots,n$. In particular: from $B=A^{-1}$ it follows that ${\cal B}_p = {\cal A}_p^{-1}$, $p=1,2,\ldots,n$.

4. Theorem: Let $A=(a_{ij})_{i,j=1}^n$ and

$$ \left|A-\lambda I\right| = (-\lambda)^n + c_{n-1}(-\lambda)^{n-1} + c_{n-2}(-\lambda)^{n-2} + \cdots + c_1(-\lambda) + c_0. $$

Then

$$ c_{n-1} = \sum_{1\le i\le n} a_{ii}, \qquad c_{n-2} = \sum_{1\le i_1\lt i_2\le n} A_{i_1i_2}^{i_1i_2}, \qquad c_{n-3} = \sum_{1\le i_1\lt i_2\lt i_3\le n} A_{i_1i_2i_3}^{i_1i_2i_3}, \quad \ldots,\quad c_0 = A_{1\ldots n}^{1\ldots n}=\left|A\right|. $$

Proof: See Felix Ruvimovich Gantmacher (1908--1964), Gantmacher (1986), "Matrizentheorie", §3.7. The power $(-\lambda)^{n-p}$ occurs in those terms of $\left|A-\lambda I\right|$ that contain

$$ a_{k_1k_1}-\lambda, {\mskip 5mu} a_{k_2k_2}-\lambda, {\mskip 5mu} \ldots, {\mskip 5mu} a_{k_{n-p}k_{n-p}}-\lambda, \qquad k_1\lt \cdots\lt k_{n-p} $$

enthalten. Anwendung des allgemeinen Laplaceschen Entwicklungssatzes entwickelt nach $(k_1,\ldots,k_{n-p})$ liefert

$$ \left|A-\lambda I\right| = (a_{k_1k_1}-\lambda) (a_{k_2k_2}-\lambda) \ldots (a_{k_{n-p}k_{n-p}}-\lambda) A_{i_1\ldots i_p}^{i_1\ldots i_p} + \hbox{Rest}, $$

where $(i_1,\ldots,i_p)$ is the arrangement complementary to $(k_1,\ldots,k_{n-p})$, i.e., $\{k_1,\ldots,k_{n-p},{\mskip 3mu}i_1,\ldots,i_p\} = \{1,\ldots,n\}$. Forming all possible ${n\choose n-p}={n\choose p}$ combinations of $n-p$ elements $k_1<\cdots<k_{n-p}$ containing said diagonal elements, one obtains exactly $n\choose p$ minors as a sum, which make up the coefficient of $(-\lambda)^{n-p}$.     ☐

5. Example: for $c_{n-k}=\sum_i A\multisubsup kii$ in the case $n=3$. For

$$ \left|\matrix{ a_{11}-\lambda & a_{12} & a_{13}\cr a_{21} & a_{22}-\lambda & a_{23}\cr a_{31} & a_{32} & a_{33}-\lambda\cr }\right| $$

one obtains

$$ \eqalignno{ &\phantom{=} (-\lambda)^3 + (a_{11}+a_{22}+a_{33})\lambda^2 + (a_{11}a_{22}-a_{21}a_{12}+a_{11}a_{33}-a_{31}a_{13}+a_{22}a_{33}-a_{32}a_{23})(-\lambda) + \left|A\right| &\cr &= (-\lambda)^3 + (A_1^1+A_2^2+A_3^3)\lambda^2 + (A_{12}^{12}+A_{13}^{13}+A_{23}^{23})(-\lambda) + \left|A\right|. &\cr } $$

6. A direct consequence is Vieta's root theorem; François Viète (1540--1603). One uses either a Jordan normal form ($A=XJX^{-1}$) or a Schur normal form ($A=UT\adj U$). The characteristic polynomial is invariant under similarity transformations, hence

$$ c_{n-k} = \sum_{i_1\lt \cdots\lt i_k} \lambda_{i_1}\ldots\lambda_{i_k} = \sum_{i_1\lt \cdots\lt i_k} A\multisubsup kii. $$

It is not claimed that in general $\lambda_{i_1}\ldots\lambda_{i_k}=A\multisubsup kii$. For example, for an invertible companion matrix $C_1\in\mathbb{C}^{n\times n}$ one has $\lambda_1\ldots\lambda_k\ne (C_1)_{1\ldots k}^{1\ldots k}=0$ for $k<n$.

7. Theorem: Suppose two diagonalizable matrices $A,B\in\mathbb{C}^{n\times n}$ have all their eigenvectors in common. Then $AB=BA$, i.e., $A$ and $B$ commute.

Proof: Let $X$ contain all the eigenvectors, $D_1=\mathop{\rm diag}\lambda_i$, $D_2=\mathop{\rm diag}\mu_i$, $A=XD_1X^{-1}$, $B=XD_2X^{-1}$. Then $AB=XD_1X^{-1}XD_2X^{-1}=XD_1D_2X^{-1}=XD_2D_1X^{-1}=XD_2X^{-1}XD_1X^{-1}=BA$.     ☐

8. Theorem: Suppose $AB=BA$. Then $A$ and $B$ have common eigenvectors.

Proof: See James H. Wilkinson (1919--1986), Wilkinson (1965), "The Algebraic Eigenvalue Problem"; see also Gantmacher, Felix R. (1908--1964), Gantmacher (1986), "Matrizentheorie", §9.10. For an arbitrary eigenpair $(\lambda,x)$ of $A$ one has $AB^kx=\lambda B^kx$, $k=0,1,2,\ldots$ In the sequence of vectors $x$, $Bx$, $B^2x$, $\ldots$ let the first $p$ vectors be linearly independent, so that the $(p+1)$-th vector $B^px$ is a linear combination of the $p$ preceding ones. The subspace ${\cal S}:=\left<x,Bx,\ldots,B^{p-1}x\right>$ is invariant under $B$, i.e., $B{\cal S}\subseteq\cal S$, hence there exists an eigenvector $y\in\cal S$ of $B|{\cal S}$, and thus of $B$. The relation $AB^kx=\lambda B^kx$ shows that $x$, $Bx$, $B^2x$, $\ldots$ are eigenvectors of $A$ for the same eigenvalue $\lambda$; in particular, every linear combination of these vectors is an eigenvector of $A$, hence so is $y\in\cal S$.     ☐

9. Remark: Essential to the proof was that $B$ possesses an eigenvector. For complex matrices this is clear by the fundamental theorem of algebra. For real matrices (over $\mathbb{R}$) no real eigenvalue need exist, and hence no eigenvector either. The rotation matrix $T={\cos\alpha{\mskip 3mu}-\sin\alpha\choose\sin\alpha{\mskip 3mu}\cos\alpha}$ has no real eigenvalue for suitable $\alpha$. Intuitively this is plausible, because not every rotation stretches, shrinks, or has fixed directions. Algebraically it is evident because $\det(T-\lambda I)= \lambda^2-2\lambda\cos\alpha+1=(\lambda-\cos\alpha)^2+(1-\cos^2\alpha)$ does not split over $\mathbb{R}$ for every $\alpha$. In $\mathbb{C}$, however, $T$ does have the two eigenvalues $\lambda=\cos\alpha\pm i\sin\alpha$. The theorem remains true if, in the real case, one additionally requires that $B$ has only real eigenvalues, e.g., if $B$ is Hermitian. The theorem also remains true under the assumption that $A$ and $B$ contain only $1\times1$ Jordan blocks (linear elementary divisors).

10. Theorem: Let $A\in\mathbb{C}^{m\times n}$ and $B\in\mathbb{C}^{n\times m}$. If both matrices are square ($m=n$), then $AB$ and $BA$ have the same characteristic polynomial and hence the same eigenvalues with multiplicities. In the case $m\ne n$, $AB$ and $BA$ have the same eigenvalues with multiplicities, except that the larger of the two products has $\left|m-n\right|$ additional zeros in its spectrum.

Proof: see Wilkinson, J.H., Wilkinson (1965). One has

$$ \left|\matrix{ I&0\cr -B&\mu I\cr }\right| \left|\matrix{ \mu I&A\cr B&\mu I\cr }\right| = \left|\matrix{ \mu I&A\cr 0&\mu^2I-BA\cr }\right| $$

and

$$ \left|\matrix{ \mu I&-A\cr 0&I\cr }\right| \underbrace{ \left|\matrix{\mu I&A\cr B&\mu I\cr}\right| }_{{}=:\alpha} = \left|\matrix{ \mu^2I-AB&0\cr B&\mu I\cr }\right|. $$

Hence

$$ \mu^n \alpha = \mu^n \left|\mu^2I-BA\right| = \mu^n \left|\mu^2I-AB\right|. $$

For $\mu=0$ observe $\left|AB\right|=\left|BA\right|$. The case $m\ne n$ is proved in the same way.     ☐

The proof could also have been carried out directly via the coefficients of the characteristic polynomial. Namely, with

$$ \eqalignno{ \left|AB-\lambda I\right| &= (-\lambda)^n+c_{n-1}(-\lambda)^{n-1}+ \cdots+c_1(-\lambda)+c_0,\cr \left|BA-\lambda I\right| &= (-\lambda)^n+d_{n-1}(-\lambda)^{n-1}+ \cdots+d_1(-\lambda)+d_0,\cr } $$

one computes the $c_i$ and $d_i$ as

$$ c_{n-k} = \sum_i (AB)\multisubsup kii = \sum_{i,\ell} A_i^\ell B_\ell^i, \qquad d_{n-k} = \sum_i (BA)_i^i = \sum_{i,\ell} B_i^\ell A_\ell^i. $$

Interchanging $i$ and $\ell$ in one of the two sums shows equality, apart from possible “shifts of position”. Thus $c_{n-k+\ell}=d_{n-k}$, which amounts precisely to multiplying the characteristic polynomial by $\lambda^\ell$.

]]>
https://eklausmeier.goip.de/blog/2024/01-24-lines-of-code-of-various-open-source-projects https://eklausmeier.goip.de/blog/2024/01-24-lines-of-code-of-various-open-source-projects Lines of Code of various Open-Source Projects Wed, 24 Jan 2024 20:10:00 +0100 As of today, the following open-source projects have the numbers of lines of code (LOC) listed below.

Name LOC in million
Linux kernel 34.987
Chrome 30.992
PHP 1.814
Apache HTTP Server 1.659
WordPress 1.157
Slurm 0.844
Git 0.580
X server 0.511
bash 0.249
Zola 0.022
Simplified Saaze 0.002
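The post does not say how these numbers were obtained; counts of this kind are commonly produced with a tool such as cloc. A minimal sketch (repository URL only as an example):

$ git clone --depth 1 https://github.com/php/php-src
$ cloc php-src

cloc reports blank, comment, and code lines per language; the code total is what a LOC column like the one above would show.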
]]>
https://eklausmeier.goip.de/blog/2024/01-23-matrixpolynome https://eklausmeier.goip.de/blog/2024/01-23-matrixpolynome Matrixpolynome Tue, 23 Jan 2024 19:45:00 +0100 Matrix polynomials (occasionally also called $\lambda$-matrices) are polynomials whose coefficients are matrices, square or rectangular; for now this makes no difference. So

$$ L(\lambda) = A_\ell\lambda^\ell + A_{\ell-1}\lambda^{\ell-1} + \cdots + A_1\lambda + A_0, \qquad A_\ell,A_{\ell-1},\ldots,A_1,A_0\in\mathbb{C}^{m\times n}. $$

For the case $\ell=1$ one frequently has $L(\lambda)=I\lambda-A$.

1. Vector spaces and linear maps

1. Definition: (1) A vector $a_1$ is called linearly dependent on the vectors $a_2,\ldots,a_n$ if and only if $a_1$ is a linear combination of these $(n-1)$ vectors, i.e.,

$$ a_1 = \lambda_2a_2 + \cdots + \lambda_na_n, \qquad \lambda_2,\ldots,\lambda_n\in\mathbb{C}. $$

In symbols: $a_1{\mathrel{\underline\perp}}(a_2,\ldots,a_n)$. The $n$ vectors $a_1,\ldots,a_n$ are then likewise called linearly dependent, in symbols ${\mathrel{\underline\perp}}(a_1,\ldots,a_n)$.

(2) $a_1$ is linearly independent of $a_2,\ldots,a_n$ if and only if $a_1$ is not linearly dependent on $a_2,\ldots,a_n$, i.e., $a_1$ cannot be represented as a linear combination of the other $(n-1)$ vectors. In symbols $a_1{\mathrel{\underline{\not\perp}}}(a_2,\ldots,a_n)$.

(3) If $a_1{\mathrel{\underline{\not\perp}}} a_2,\ldots,a_n$, $a_2{\mathrel{\underline{\not\perp}}} a_1,a_3,\ldots,a_n$, $\ldots$, $a_n{\mathrel{\underline{\not\perp}}} a_1,\ldots,a_{n-1}$, then the family of vectors $(a_1,\ldots,a_n)$ is called linearly independent (as such), in symbols ${\mathrel{\underline{\not\perp}}}(a_1,\ldots,a_n)$.

If $a_1$ is linearly dependent on $a_2,\ldots,a_n$, then $a_1$ is in a certain sense superfluous, since $a_1$ can be assembled from the other vectors. If $a_2,\ldots,a_n$ lie in a plane, then of course $a_1$ lies in the same plane as well. Note that a (binary) relation between one vector and $(n-1)$ other vectors has been defined, as well as a property of $n$ vectors, i.e., an $n$-ary relation.

2. Definition and properties of standard triples

Given the monic matrix polynomial

$$ L(\lambda)=\sum_{i=0}^\ell A_i\lambda^i, \qquad A_\ell=I,\quad A_i\in\mathbb{C}^{n\times n}. $$

The family of vectors $x_0,\ldots,x_k$, with $x_0\ne\bf0$, $x_i\in\mathbb{C}^{n\times1}$, is called a right Jordan chain (also right Keldysh chain), Keldysh, M.V., of length $(k+1)$ for the matrix polynomial $L(\lambda)$ at the eigenvalue $\lambda_0$ if and only if

$$ \pmatrix{ L(\lambda_0) & & & \llap{0}\cr L'(\lambda_0) & L(\lambda_0) & & \cr \vdots & \vdots & \ddots & \cr {1\over k!}L^{(k)}(\lambda_0) & {1\over(k-1)!}L^{(k-1)}(\lambda_0) & \ldots & L(\lambda_0)\cr} \pmatrix{x_0\cr x_1\cr \vdots\cr x_k\cr} = \pmatrix{0\cr 0\cr \vdots\cr 0\cr}. $$

The matrix on the left-hand side, call it $\mathbb{P}$, is of course not invertible, because $L(\lambda_0)$ is not invertible.

The family of vectors $y_0,\ldots,y_k$, with $y_0\ne\bf0^\top$, $y_i\in\mathbb{C}^{1\times n}$, is called a left Jordan chain of length $(k+1)$ for the matrix polynomial $L(\lambda)$ at the eigenvalue $\lambda_0$ if and only if

$$ (y_0,\,\ldots,\,y_k)\cdot\mathbb{P}=(0^\top,\,\ldots,\,0^\top), $$

that is, if $y_0^\top,\ldots,y_k^\top$ is a right Jordan chain for the transposed polynomial $L^\top(\lambda)$.

The pair of matrices $(X,T)$, with $X$ of size $n\times n\ell$ and $T$ of size $n\ell\times n\ell$, is called a standard pair if and only if:

  1. $\mathop{\rm col}(XT^i)_{i=0}^{\ell-1}$ is invertible,
  2. $\sum_{i=0}^\ell A_iXT^i=\bf 0$.

If $T$ is a Jordan matrix, the pair $(X,T)$ is also called a Jordan pair.

The triple of matrices $(X,T,Y)$, with $X$ of size $n\times n\ell$, $T$ of size $n\ell\times n\ell$, and $Y$ of size $n\ell\times n$, is called a standard triple of the matrix polynomial $L(\lambda)$ if and only if:

  1. $(X,T)$ is a standard pair,
  2. the matrix $Y$ is given by
$$ Y = \pmatrix{X\cr XT\cr \vdots\cr XT^{\ell-1}\cr}^{-1} \pmatrix{0\cr \vdots\cr 0\cr I\cr}. $$

If $T$ is again a Jordan matrix, $(X,T,Y)$ is also called a Jordan triple.
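A concrete standard triple always exists; the following companion-form triple is the standard example in Gohberg/Lancaster/Rodman (it is not spelled out in the text above):

$$ T = \pmatrix{0&I&&\cr &&\ddots&\cr &&&I\cr -A_0&-A_1&\ldots&-A_{\ell-1}\cr}, \qquad X = (I,0,\ldots,0), \qquad Y = \pmatrix{0\cr \vdots\cr 0\cr I\cr}. $$

Here $XT^i$ has $I$ in block position $i+1$ for $i=0,\ldots,\ell-1$, so $\mathop{\rm col}(XT^i)_{i=0}^{\ell-1}=I$ is invertible; moreover $XT^\ell=(-A_0,\ldots,-A_{\ell-1})$, whence $\sum_{i=0}^\ell A_iXT^i=\bf 0$; and the defining formula for $Y$ indeed yields $\mathop{\rm col}(0,\ldots,0,I)$.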

If $(X,T,Y)$ is a Jordan triple, then the columns of $X$ are right Jordan chains (Keldysh chains), Keldysh, M.V., of $L(\lambda)$, provided $X$ is split into blocks consistently with the partitioning of the Jordan matrix $T$. Dually to this, the rows of $Y$ are left Jordan chains of $L(\lambda)$. Summarizing, the required dimensions of the matrices $X$, $T$, and $Y$ can be read off from the scheme

$$ \left(X, T, Y\right): \qquad \eqalign{X\colon{}&n\times n\ell\cr T\colon{}&n\ell\times n\ell\cr Y\colon{}&n\ell\times n\cr} \qquad \eqalign{X\colon{}&\mathbb{C}^{n\ell}\rightarrow\mathbb{C}^n\cr T\colon{}&\mathbb{C}^{n\ell}\rightarrow\mathbb{C}^{n\ell}\cr Y\colon{}&\mathbb{C}^n\rightarrow\mathbb{C}^{n\ell}\cr} \qquad \eqalign{X\colon{}&\mathbb{R}^\ell\rightarrow\mathbb{R}\cr T\colon{}&\mathbb{R}^\ell\rightarrow\mathbb{R}^\ell\cr Y\colon{}&\mathbb{R}\rightarrow\mathbb{R}^\ell\cr} $$

If $(X,T,Y)$ is a standard triple, then

$$ XT^iY=\cases{0,&for $i=0,\ldots,\ell-2$,\cr I,&for $i=\ell-1$.\cr} $$

1. Equivalent characterizations of standard triples. The following properties hold. The triple of matrices $(X,T,Y)$ is a standard triple if and only if the inverse of the matrix polynomial $L(\lambda)$ has the representation

$$ L^{-1}(\lambda) = X (I\lambda-T)^{-1} Y, \qquad\lambda\notin\sigma(L). $$
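For $\ell=1$ and $L(\lambda)=I\lambda-A$, for instance, $(X,T,Y)=(I,A,I)$ is a standard triple, and this representation reduces to the ordinary resolvent $L^{-1}(\lambda)=(I\lambda-A)^{-1}$.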

$L^{-1}(\lambda)$ can be viewed as the transfer function of the linear system

$$ {d{\bf x}\over dt} = T{\bf x}+Y{\bf u},\qquad y=X{\bf x},\quad{\bf x}(0)=0, $$

with input ${\bf u}$ and output $y$.

Furthermore,

$$ {1\over2\pi i}\int_\Gamma f(\lambda)L^{-1}(\lambda)d\lambda = X f(T) Y, $$

where $\Gamma$ is a rectifiable curve such that $\sigma(L)$ lies inside $\Gamma$, and $f$ is holomorphic inside $\Gamma$ and in a neighborhood of $\Gamma$.

2. Linearizations. The matrix polynomial $I\mu-A$ of size $(n+p)\times(n+p)$ is a linearization of the matrix polynomial $L(\mu)$ of size $n\times n$ and degree $\ell$ if and only if

$$ I\mu-A\sim\pmatrix{L(\mu) & 0\cr 0 & I\cr}. $$

Two matrix polynomials $M_1(\mu)$ and $M_2(\mu)$ are equivalent, $M_1(\mu)\sim M_2(\mu)$, if and only if

$$ M_1(\mu) = E(\mu) M_2(\mu) F(\mu), \qquad\forall\mu\in\mathbb{C}, $$

with matrix polynomials $E(\mu)$ and $F(\mu)$ that have constant non-vanishing determinants. Obviously $n+p=n\ell$ must hold. Two linearizations are always similar to each other, and every matrix similar to a linearization is itself a linearization. Incidentally, for square matrices every matrix is similar to its transpose. Furthermore, the following holds.

3. Theorem: Given a matrix $T\in\mathbb{C}^{m\times m}$, $T$ is a linearization of a monic matrix polynomial of degree $\ell$ and size $n\times n$ if and only if the two following conditions are satisfied:

  1. $m=n\ell$ and
  2. $\displaystyle\max_{\lambda\in\mathbb{C}}\dim\ker(I\lambda-T)\le n$.

The proof reduces to the Smith normal form theorem. For the proof of this and other facts relevant here, see the book by Gohberg/Lancaster/Rodman (1982), where further references on this topic are given. The authors are Gohberg, Israel Tsudikovich, Lancaster, Peter, and Rodman, Leiba.

4. Matrix difference equations and standard triples. In linear multistep methods of the form

$$ \alpha_0y_n+\alpha_1y_{n+1}+\cdots+\alpha_ky_{n+k} = h\left(\beta_0f_n+\beta_1f_{n+1}+\cdots+\beta_kf_{n+k}\right), \qquad\alpha_k\ne0, $$

scalar difference equations arise naturally. In cyclic, linear methods, e.g., of the form

$$ \begin{align} -2y_{3m-2}+9y_{3m-1}-18y_{3m}+11y_{3m+1} &= 6h\dot y_{3m+1},\cr -2y_{3m-1}+9y_{3m}-18y_{3m+1}+11y_{3m+2} &= 6h\dot y_{3m+2},\cr 9y_{3m+1}-12y_{3m+2}+3y_{3m+3} &= h\bigl(-4\dot y_{3m+1}-4\dot y_{3m+2}+2\dot y_{3m+3}\bigr),\cr \end{align} $$

matrix difference equations of the form

$$ u_{\ell+r}+A_{\ell-1}u_{\ell-1+r}+\cdots+A_1u_{1+r}+A_0u_r = f_r, \qquad r=0,1,\ldots $$

arise just as naturally. Occasionally it is advantageous to have a representation of the solution of the difference equation that makes clear how all previously computed values enter subsequent ones.

5. Theorem: The solution of the matrix difference equation

$$ Iu_{\ell+r}+\sum_{i=0}^{\ell-1}A_iu_{i+r}=f_r,\qquad r=0,1,\ldots, $$

has the representation

$$ u_{m+1}=XT^{m+1}c+X\sum_{i=0}^m T^{m-i}Yf_i,\qquad m=0,1,\ldots, $$

where $(X,T,Y)$ is a standard triple of the matrix polynomial

$$ L(\lambda)=I\lambda^\ell+\sum_{i=0}^{\ell-1}A_i\lambda^i. $$

The vector $c\in\mathbb{C}^{n\ell}$ is uniquely determined by prescribing the initial values

$$ u_r=a_r,\qquad r=0,\ldots,\ell-1 $$

and is given by

$$ c = \pmatrix{Y,&TY,&\ldots,&T^{\ell-1}Y}\pmatrix{ A_1 & A_2 & \ldots & I\cr A_2 & \vdots & \unicode{x22F0} & 0\cr \vdots & I & & \vdots\cr I & 0 & \ldots & 0\cr} \pmatrix{a_0\cr a_1\cr \vdots\cr a_{\ell-1}\cr} = \left(\mathop{\rm col}_{i=0}^{\ell-1} XT^i\right)^{-1}\mathop{\rm col}_{\nu=0}^{\ell-1} a_\nu. $$

Setting $R=\mathop{\rm row}_{i=0}^{\ell-1}T^iY$ and $Q=\mathop{\rm col}_{i=0}^{\ell-1}XT^i$, and writing $B$ for the block coefficient matrix above and $a=\mathop{\rm col}_{\nu=0}^{\ell-1}a_\nu$, one has $RBQ=I$ and $c=RBa=Q^{-1}a$.
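As a sanity check (not part of the original text), take $\ell=1$, $n=1$: the equation $u_{r+1}+A_0u_r=f_r$ belongs to $L(\lambda)=\lambda+A_0$ with standard triple $(1,-A_0,1)$, and the theorem yields

$$ u_{m+1} = (-A_0)^{m+1}a_0 + \sum_{i=0}^m (-A_0)^{m-i}f_i, $$

which is exactly what iterating the recursion from $u_0=a_0$ gives.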

]]>
https://eklausmeier.goip.de/blog/2024/01-20-member-of-250kb-club https://eklausmeier.goip.de/blog/2024/01-20-member-of-250kb-club Member of 250KB club Sat, 20 Jan 2024 19:10:00 +0100 I am now a member of the 250KB club. See "Proud member":

eklausmeier.goip.de

Proud member of the exclusive 250KB Club!

Added: 2024-01-19 | Last updated: 2024-01-19

eklausmeier.goip.de is a member of the exclusive 250KB Club. The page weighs only 78kb and has a content-to-bloat ratio of 13%.

You are now entitled to add one of those shiny badges to your page. But don't forget, even though I tried to make them as small as possible, a badge will add some kilobytes to your page weight. A code snippet can be found by clicking on the respective badge.




While the overall size of 78kB (compressed) is OK, the content-to-bloat ratio of 13% is not so good. I.e., 87% is effectively bloat, roughly 68 of the 78kB. In my case the major contributing factors are:

  1. Google Fonts, through no fault of Google's
  2. JavaScript for Pagefind, providing instant search

For example, the post Moved Blog To eklausmeier.goip.de measured with tools.pingdom.com loads in 244ms from Frankfurt and needs 8 requests.

The distribution among content types is as below.

Again, 90% is fonts, script, and CSS, i.e., bloat. Without losing any information, but at the cost of appearance and slickness, I could save 80%!

Looking at the waterfall diagram one can see that dropping the fonts would not lead to a significantly faster website. This is because Google is pretty fast at serving all those fonts. Similarly, Pagefind's processing can be seen overlapping the other processing, so it does not add much waiting time.

Though I am also a little guilty in the overall website obesity crisis. As Maciej Cegłowski put it in his talk "The Website Obesity Crisis":

Most of the talk about web performance is similarly technical, involving compression, asynchronous loading, sequencing assets, batching HTTP requests, pipelining, and minification.

All of it obscures a simpler solution.

If you're only going to the corner store, ride a bicycle.

If you're only displaying five sentences of text, use vanilla HTML. Hell, serve a textfile! Then you won't need compression hacks, integral signs, or elaborate Gantt charts of what assets load in what order.

Browsers are really, really good at rendering vanilla HTML.

We have the technology.

Being a member of the 250KB club is not very surprising as I am already a member of the 512KB club, in particular their "green team", i.e., the team with websites smaller than 100kB uncompressed.

]]>
https://eklausmeier.goip.de/blog/2024/01-14-performance-comparison-of-lemire-website-wordpress-vs-simplified-saaze https://eklausmeier.goip.de/blog/2024/01-14-performance-comparison-of-lemire-website-wordpress-vs-simplified-saaze Performance Comparison of Lemire Website: WordPress vs. Simplified Saaze Sun, 14 Jan 2024 20:00:00 +0100 In the previous post Example Theme for Simplified Saaze: Lemire I demonstrated the transition of a website from WordPress to Simplified Saaze. This very blog also uses Simplified Saaze. This post shows how much this transition improved performance. The comparison is therefore between:

  1. Original: WordPress version, lemire.me
  2. Modified: Simplified Saaze version of Lemire

The original website is hosted by SiteGround and Cloudflare. It uses WordPress.

1. Comparison. For the comparison I use the website tools.pingdom.com, which provides various metrics to evaluate the performance of a website:

  1. Page size
  2. Number of requests
  3. Load time
  4. Concrete tips to improve performance
  5. Waterfall diagram of requests
  6. Breakdown of content types

All tests in Pingdom were conducted for Europe/Frankfurt, as I host all stuff on below machine in my living room not far from Frankfurt.

The post in question is Fast integer compression with Stream VByte on ARM Neon processors. The version using Simplified Saaze is here. This post has no comments, therefore the WordPress site is at no disadvantage compared to the Simplified Saaze powered site. This post contains C code shown in syntax-highlighted form.

The results are thus:

Original (WordPress) Modified (Simplified Saaze)

The results for the original website, based on WordPress, are indeed worse on every dimension: page size, load time, number of requests. In comparison to the modified version using Simplified Saaze the ratio is roughly:

  1. Page size is more than 4:1
  2. Load time is almost 3:1
  3. Number of requests is 4:1

So Simplified Saaze is better in all dimensions by a multiple. This is particularly striking as the Simplified Saaze version is entirely self-hosted, i.e., the upload bandwidth to the internet is limited to 50 MBit/s!

The recommendations for the original website are therefore not overly surprising:

The missing compression is clearly an oversight on the web-server part.

The breakdown of the content type for the original website is:

I uploaded the Simplified Saaze version to Netlify, which provides CDN functionality. Then I measured again, requesting the WordPress post from San Francisco and likewise the Simplified Saaze version from San Francisco. The measurements are pretty similar to the Frankfurt results.

Original (WordPress) San Francisco Modified (Simplified Saaze) San Francisco

2. Modified website. The breakdown of the modified site, based on Simplified Saaze, is as below.

Actual loading of the modified site will roughly follow below waterfall diagram. This waterfall diagram shows that a major part of the loading time is spent in syntax highlighting (prism.js) and searching (pagefind). The fonts from Google load in record time.

3. Security considerations. Prof. Lemire's blog had been the target of a hack in 2008: My blog got hacked. Using a static site this attack could probably have been prevented, assuming HashOver is not affected.

End of 2008 problems still persisted: Need help protecting my blog.

A site using Markdown files as input is easy to back up, way easier than a database. Just think about any schema changes in the database during version upgrades. See Simplified Saaze:

Simplified Saaze works with ordinary files in your filesystem. No database required. This means less setup and maintenance, better security and more speed.

Storing your Markdown files in Git is one option.

4. Caching content. Prof. Lemire reported caching problems:

I estimate that I get somewhere between 30,000 and 50,000 unique visitors a month. Despite my efforts, my blog keeps on failing under the load. It becomes unavailable for hours.

These caching problems would go away with a static site. Obviously. The static site would handle the "Slashdot effect" quite effectively.

]]>
https://eklausmeier.goip.de/blog/2024/01-08-vodafone-internet-outage https://eklausmeier.goip.de/blog/2024/01-08-vodafone-internet-outage Vodafone Internet Outage Mon, 08 Jan 2024 20:10:00 +0100 Today, 08-Jan-2024, starting at 18:49 (CET), internet provided by Vodafone was unavailable. I called the hotline of Vodafone and they confirmed that they had a major outage in my region. This means: my homepage, i.e., this blog, is unavailable.

Photo

In 2022 the internet router was defective. This time Vodafone confirmed that the router is fine. The fault is on their end.

BetterUptime noticed the error in a timely fashion via e-mail, which, of course, I could not read, as I had no internet:

Monitor: eklausmeier.goip.de/…txt
Checked URL: GET https://eklausmeier.goip.de/betterUptime.txt
Cause: Failure when receiving data from the peer

Started at: 8 Jan 2024 at 06:53pm CET

Since around 22:00 (CET) internet is available again. BetterUptime reported a resolved incident at 22:55 (CET):

Monitor: eklausmeier.goip.de/…txt
Checked URL: GET https://eklausmeier.goip.de/betterUptime.txt
Cause: Failure when receiving data from the peer

Started at: 8 Jan 2024 at 06:53pm CET
Resolved at: 8 Jan 2024 at 10:55pm CET (automatically)
Length: 3 hours and 58 seconds

So overall, BetterUptime did a good job here.

]]>
https://eklausmeier.goip.de/blog/2024/01-04-aus-allen-wolken-gefallen-cloud-repatriierung-rueckzug-aus-der-cloud https://eklausmeier.goip.de/blog/2024/01-04-aus-allen-wolken-gefallen-cloud-repatriierung-rueckzug-aus-der-cloud Aus allen Wolken gefallen: Cloud-Repatriierung, Rückzug aus der Cloud Thu, 04 Jan 2024 19:00:00 +0100

Cloud computing is on everyone's lips. One hears:

  1. The cloud is modern.
  2. The cloud is green.
  3. The cloud saves costs.
  4. The cloud boosts sales.

However, the euphoria is showing cracks. The predicted cost savings do not materialize. Building up a cloud infrastructure takes time and requires special knowledge and experience that are not to be found everywhere. Reports are now mounting that a number of high-profile companies are leaving the cloud again and returning to self-administered data centers or colocation. What happened?

1. The cloud market. Before we answer that, a look at the development of the cloud market. The well-known players in the market are:

  1. AWS - Amazon Web Services
  2. Google Cloud Platform
  3. Microsoft Azure
  4. Oracle Cloud

Besides these there is a multitude of smaller and regional providers. For example, Flexential made negative headlines in November 2023.

The following table gives the revenues in billions (10^9) of USD, rounded to whole billions.

Provider/Year 2016 2017 2018 2019 2020 2021 2022
AWS 12 17 26 35 45 62 80
Google Cloud 4 6 9 13 19 26
Azure ("Intelligent Cloud") 25 27 32 39 48 60 75

Figures taken from the annual reports, see the literature list below.

In view of the above revenues there was, and still is, a gold-rush atmosphere.
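For perspective, a back-of-the-envelope calculation of my own from the table above: AWS revenue grew from 12 to 80 billion USD within six years, i.e., by a factor of 80/12 ≈ 6.7, which corresponds to (80/12)^(1/6) ≈ 1.37, or roughly 37% average growth per year.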

2. Clichés. The former Google engineer Nima Badizadegan describes in Use One Big Server that, in parallel with the growth of cloud infrastructure, the computing power of hardware has increased enormously. A single small, modern server easily covers the needs of numerous applications. What is more, the usual clichés about the advantages of a cloud solution are not true, or only partly true. Pointedly summarized in the following.

  1. If I use cloud infrastructure, then I need no system administrators — no, that is not true.
  2. If I use cloud infrastructure, then I need not care about security patches — no, that is not true.
  3. If I use cloud infrastructure, then I need not worry about the machine being unavailable — no, that is not true.
  4. I can develop software faster on cloud infrastructure — no, that is not true.
  5. My hardware workload fluctuates strongly, sometimes high, sometimes low — here we indeed have a genuine candidate.

Strongly fluctuating computing demand is the ideal-typical use case for cloud infrastructure. Paired with the quick availability of the computing power, this favors agile working and high flexibility. Renting special hardware, such as GPUs or FPGAs, also belongs in this category. One example of this is Modal Labs.

Another cliché is that cloud infrastructure is particularly green. Here one has to know that the large cloud providers, such as Amazon, Google, Microsoft, etc., must permanently hold computing capacity in reserve. Otherwise it would not be possible to provide additional capacity at short notice. Holding this capacity in reserve costs hardware and of course also electricity, and produces waste heat that must be removed. It is the same dilemma that mutual fund companies are in: they must permanently hold a cash reserve so that they can pay out sellers of fund shares from that reserve, without having to sell their core investments directly to obtain cash. It is similar with car rental companies such as Avis, Sixt, Europcar, etc.: cars must be kept available at all times in order to offer the rental service at any moment. In contrast, whoever buys exactly the hardware needed for the purpose at hand has no idle capacity in the data center.

The cloud is green insofar as the cloud provider, out of self-interest, builds its data centers very efficiently. I.e., data centers of these providers achieve a power usage effectiveness of approx. 1.1 or even below. Power usage effectiveness is the ratio of total power (cooling + hardware) to the power consumed by the IT hardware. From 2030 at the latest, all data centers must achieve a power usage effectiveness of at most 1.3 (§11 paras. 1 & 2), see Das Energieeffizienzgesetz (the German Energy Efficiency Act).
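As a quick worked example (the numbers are mine, for illustration only): a data center whose IT hardware draws 1.2 MW and which consumes 1.32 MW in total has a power usage effectiveness of 1.32 / 1.2 = 1.1, and would thus meet both the typical hyperscaler value and the 2030 legal limit of 1.3.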

3. Costs. If one compares renting machines from a large cloud provider with procuring equivalent machines oneself, then buying and operating one's own machines is almost always cheaper. Considerably cheaper, in fact. Nima Badizadegan on this:

Being cloudy is expensive. Generally, I would anticipate a 5-30x price premium depending on what you buy from a cloud company, and depending on the baseline. Not 5-30%, a factor of between 5 and 30.

These are not isolated observations. Kristian Köhntopp, cloud architect at Syseleven, formerly at Booking, now at HERE Technologies, comes to quite similar conclusions:

Because the cloud is incredibly expensive. In my cost calculations, the costs for cloud deployments per month are roughly on par with (or higher than!) the costs for bare metal in one's own data centers per year (electricity, network, proportional network hardware, data center space and everything else included).

And further:

Conversely, this means that you can overdimension your own bare metal without restraint and still come out below AWS prices.

4. Repatriation. With such immense cost differences it is only a matter of time until the first well-known companies switch back.

  1. Dropbox moved from AWS to its own data center and saves 75 million USD.
  2. 37signals.com (brands Basecamp and HEY) saves 60% of its costs monthly, about 10 million USD.
  3. Twitter also reported a 60% cost reduction in October 2023, amounting to 100 million USD:

Optimized our usage of cloud service providers and began doing much more on-prem. This shift has reduced our monthly cloud costs by 60%.

  4. Sofascore reduced its costs by a factor of 10.

The job profile of a cloud repatriation expert is even beginning to emerge. This was ultimately caused by the overly hasty and one-sided migration of numerous data centers into the cloud in recent years. Now that the cost shoe pinches, companies are turning back to cost-efficient and tailored data center solutions.

5. Conclusion. Cloud infrastructure widens the options of the software architect and offers new possibilities for web applications, data analytics and storage. In particular, its rapid availability facilitates agile working — hardware at the push of a button.

By contrast, a schematic adoption of new services does not necessarily lead to cost-efficient solutions. The "cloud first" strategy found in some companies (priority of the cloud over one's own data center) narrows the variety of options. It replaces the virtues of thrift and craftsmanship with zeitgeist and obedience.


Literature

  1. Nima Badizadegan on Cloud Computing
  2. Cloudflare Dashboard Down
  3. Amazon Annual reports, proxies and shareholder letters
  4. Amazon 2018 Annual Report
  5. Amazon 2020 Annual Report
  6. Amazon 2021 Annual Report
  7. Alphabet 2022 Annual Report
  8. Microsoft Annual Report 2017
  9. Microsoft Annual Report 2020
  10. Microsoft Annual Report 2021
  11. Microsoft Annual Report 2022
  12. Various Quotes from Kristian Köhntopp
  13. David Heinemeier Hansson on Cloud Computing
  14. Josep Stuhli On Scaling to 20 Million Users
  15. Why companies are leaving the cloud


]]>
https://eklausmeier.goip.de/blog/2024/01-02-example-theme-for-simplified-saaze-lemire https://eklausmeier.goip.de/blog/2024/01-02-example-theme-for-simplified-saaze-lemire Example Theme for Simplified Saaze: Lemire Tue, 02 Jan 2024 20:00:00 +0100 Another theme for Simplified Saaze called "Lemire". You can inspect it here. This theme is modeled after the blog from Daniel Lemire. That blog is powered by WordPress, hosted on SiteGround, and performance-enhanced by Cloudflare since 2019. Prof. Lemire started blogging in 2004. The number of posts per year is given in below table. Year 2023 is not complete.

Year 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23
#posts 118 267 224 217 196 104 67 63 53 64 55 59 81 132 123 112 85 66 58 80
#comments 223 458 215 361 647 836 892 743 888 903 744 656 1340 1165 1005 1269 832 560 501 671

These numbers are given by:

for i in `seq 2004 2023`; do grep 'h2 class="entry-title"' b*.html | grep -c me/blog/$i/; done

In total there are 2,224 blog posts over 20 years of permanent blogging. It can clearly be seen that the blog is updated on a regular basis, and many readers interact with the content.

Prof. Lemire values having control over his blog and therefore doesn't use Medium or similar offerings. Some key functionalities:

  1. Allows WordPress comments
  2. Informs e-mail subscribers about new posts, he has over 12,500 mail subscribers
  3. Provides search functionality on his blog
  4. Doesn't show any advertisements
  5. Provides an Atom RSS feed
  6. Blog posts are all in English
  7. Doesn't use categories or tags
  8. Doesn't use the <!--more--> tag
  9. WordPress theme is based on "Twenty-Fifteen"
  10. There is no regular sitemap.xml for the blog posts

1. Converting the WordPress blog. Download all blog posts via the Perl script bloglemirecurl. This script downloads the so-called "pages", each of which contains 20 blog posts. Each such HTML file is then converted to Markdown via the Perl script bloglemiremd.

bloglemiremd b*.html

The Markdown files are placed in /tmp/lemire. As usual you might need a few rounds to eliminate obvious conversion errors. Finally you copy the Markdown files from /tmp/lemire to your final destination.
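The download script itself is not listed in this post. Below is a minimal sketch of what bloglemirecurl roughly does, assuming WordPress's usual /page/N/ pagination and the b&lt;N&gt;.html file naming used throughout this post; the real script may differ.

#!/bin/perl -W
# Sketch: fetch the 112 WordPress "pages", 20 posts each, into b1.html ... b112.html
use strict;

for my $i (1..112) {
    system("curl -s https://lemire.me/blog/page/$i/ -o b$i.html");
}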

There are 14 blog posts residing at the top of the directory which are not part of the timeline. These posts are accessed via the left navigation bar (in blue). To convert these posts use

bloglemiremd -t *-*.html pred*.html

Again, the converted files are stored under /tmp/lemire for inspection. Once you are fine with them, copy them to the final destination.

Go to .../content/blog and run below loop using blogdate to create an index.md for each year:

for i in `seq 2004 2023`; do blogdate -p/lemire/blog/ -y$i $i/*.md > $i/index.md; done

Embedding icon in head-template file:

  1. Download icon: curl https://lemire.me/blog/wp-content/uploads/2015/10/profile2011_152-150x150.jpg -o pr.jpg
  2. Converting to 32x32 size: convert -resize 32x32 pr.jpg pr32x32.jpg
  3. Base64-encoding file: base64 -w0 pr32x32.jpg

Size comparison for this icon: original JPG is 6,699 bytes, converted image is 934 bytes, base64-encoded is finally 1,248 bytes.
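The base64 figure matches the expected 4/3 expansion exactly: base64 encodes each 3-byte group as 4 characters, so 934 bytes become ceil(934/3) × 4 = 312 × 4 = 1,248 bytes.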

2. Installation. The entire theme including content and Simplified Saaze is installed via composer.

$ time composer create-project eklausme/saaze-lemire
Creating a "eklausme/saaze-lemire" project at "./saaze-lemire"
Installing eklausme/saaze-lemire (v1.0)
  - Downloading eklausme/saaze-lemire (v1.0)
  - Installing eklausme/saaze-lemire (v1.0): Extracting archive
Created project in /tmp/saaze-lemire
Loading composer repositories with package information
Updating dependencies
Lock file operations: 1 install, 0 updates, 0 removals
  - Locking eklausme/saaze (v1.34)
Writing lock file
Installing dependencies from lock file (including require-dev)
Package operations: 1 install, 0 updates, 0 removals
  - Downloading eklausme/saaze (v1.34)
  - Installing eklausme/saaze (v1.34): Extracting archive
Generating optimized autoload files
No security vulnerability advisories found.
        real 1.85s
        user 0.27s
        sys 0
        swapped 0
        total space 0

You need to compile a single C file once:

cd vendor/eklausme/saaze
cc -fPIC -Wall -O2 -shared php_md4c_toHtml.c -o php_md4c_toHtml.so -lmd4c-html

Now you can run php saaze.

As mentioned, Simplified Saaze is already installed via the above composer command. In case you want a separate look at the Simplified Saaze source code, see saaze.

3. Building static site. Running Simplified Saaze on all 2,224 blog posts:

saaze-lemire: time php saaze -rb /tmp/build
Building static site in /tmp/build...
        execute(): filePath=/home/klm/php/saaze-lemire/content/blog.yml, nentries=2224, totalPages=112, entries_per_page=20
Finished creating 1 collections, 1 with index, and 2259 entries (0.39 secs / 22.55MB)
#collections=1, YamlParser=0.0314/2260-1, md2html=0.0362, MathParser=0.0167/2259, renderEntry=2259, content=2259/0, excerpt=0/0
        real 0.41s
        user 0.26s
        sys 0
        swapped 0
        total space 0

In less than half a second the generation of all static files is completed. Machine in question: CPU is Ryzen 7 5700G, max clock 4.6 GHz, running on Arch Linux with kernel 6.6.8.

A screenshot of the theme is here:

Photo

The screenshot shows the results of a search, here for "WordPress".

The theme also features Pagefind. I have written on Pagefind: Searching in Static Sites. Creating the Pagefind index goes like this:

/tmp/build: time pagefind -s . --exclude-selectors aside --exclude-selectors footer

Running Pagefind v1.0.4
Running from: "/tmp/build"
Source:       ""
Output:       "pagefind"

[Walking source directory]
Found 2372 files matching **/*.{html}

[Parsing files]
Did not find a data-pagefind-body element on the site.
↳ Indexing all <body> elements on the site.

[Reading languages]
Discovered 1 language: en

[Building search indexes]
Total:
  Indexed 1 language
  Indexed 2372 pages
  Indexed 29164 words
  Indexed 0 filters
  Indexed 0 sorts

Finished in 5.325 seconds
        real 5.43s
        user 4.50s
        sys 0
        swapped 0
        total space 0

The index creation is way slower than creating all static pages.

4. Webserver rewrite rules. The conversion from WordPress to Markdown placed all blog posts from one year into a single directory at the same level. For example, the post

https://lemire.me/blog/2006/01/03/are-debuggers-obselete/

is in directory .../content/blog/2006 and in file

01-03-are-debuggers-obselete.md

On my webserver the URL can take both forms, watch out for dash vs. slash:

  1. https://eklausmeier.goip.de/lemire/blog/2006/01-03-are-debuggers-obselete
  2. https://eklausmeier.goip.de/lemire/blog/2006/01/03/are-debuggers-obselete

This is accomplished by below rewriting rule in the NGINX configuration file:

rewrite "^/lemire/blog/(\d\d\d\d)/(\d\d)/(\d\d)/(.*)"  "/lemire/blog/$1/$2-$3-$4";

Instead of above rewriting rule one could place above Markdown file in the following directory

.../content/blog/2006/01/03

But this would create a lot of directories, which essentially all contain only a single file.

5. Fetching comments from WordPress. The Perl script bloglemirecurlcomment scans through above "pages", i.e., the collections of 20 blog posts each. Each page contains 20 URLs, which are fetched via curl. Essentially, this duplicates the blog posts, but at least we now have the comments for each post as well.

for i in `seq 1 112`; do bloglemirecurlcomment ../b$i.html; done

These HTML files are then processed by bloglemirecomment, which scans for <h2 class="comments-title"> and writes out the comment file. Each comment file is generated from the original blog post file by adding the word -comment- to the file name after the day.

Type File name
Blog post /blog/yyyy/mm/dd/title.html
Comment file /blog/yyyy/mm-dd-comment-title.md

Each comment file has index: false, i.e., it will not show up in the index. Though, all content is fully searchable.

In addition the Perl script blogdate adds a link to each comment file. Calling is like:

for i in `seq 2004 2023`; do ( cd $i; ~/php/saaze-lemire/bin/blogdate -y$i *.md > index.md ) done

Counting the number of comments per year is like:

#!/bin/perl -W
# Count comments per year

use strict;

my ($year,%H) = (0,());

while (<>) {
    $year = $1 if (/<link rel="canonical" href="https:\/\/lemire.me\/blog\/(\d\d\d\d)\/(\d\d)\/(\d\d)\//);
    if (/(\w+) thought(|s) on &ldquo;/) {
        my $cnt = $1;
        $cnt = 1 if ($cnt eq 'One');
        $H{$year} += $cnt;
    }
}

for (sort keys %H) {
    printf("%04d\t%d\n",$_,$H{$_});
}

6. Building static site with separate comment pages. Generating all static pages for the entire blog including comments is:

saaze-lemire: time php saaze -rb /tmp/build
Building static site in /tmp/build...
        execute(): filePath=/home/klm/php/saaze-lemire/content/blog.yml, nentries=2224, totalPages=112, entries_per_page=20
Finished creating 1 collections, 1 with index, and 3935 entries (0.89 secs / 66.49MB)
#collections=1, YamlParser=0.0630/3936-1, md2html=0.0895, MathParser=0.0575/3935, renderEntry=3935, content=3935/0, excerpt=0/0
        real 0.91s
        user 0.56s
        sys 0
        swapped 0
        total space 0

This time can be reduced to 0.46 seconds, see Parallelizing the Output of Simplified Saaze.

Generating the pagefind index for 4048 files takes roughly 12 seconds:

/tmp/build: time pagefind -s . --exclude-selectors aside --exclude-selectors footer

Running Pagefind v1.0.4
Running from: "/tmp/build"
Source:       ""
Output:       "pagefind"

[Walking source directory]
Found 4048 files matching **/*.{html}

[Parsing files]
Did not find a data-pagefind-body element on the site.
↳ Indexing all <body> elements on the site.

[Reading languages]
Discovered 1 language: en

[Building search indexes]
Total:
  Indexed 1 language
  Indexed 4048 pages
  Indexed 60783 words
  Indexed 0 filters
  Indexed 0 sorts

Finished in 11.412 seconds
        real 11.59s
        user 10.22s
        sys 0
        swapped 0
        total space 0

Simplified Saaze allows generating single files, i.e., only a single blog post is processed by Simplified Saaze, see Single file generation. This can be used to significantly reduce the generation time.

7. HTML validation. The original site lemire.me contains more than 90 warnings and errors. See W3 Nu Html Checker.

The new site contains no errors or warnings.

8. Recap. Prof. Lemire is quite hesitant to move everything to static:

Several commenters pointed out that I could just drop WordPress and use something else. I fear that they greatly underestimate how hard this would be. Yes, I know about things like Hugo. My relatively simple home page is built using Hugo… and it took me nearly two weeks of hacking to get it to be how I want. Porting my blog to something like Hugo would be a major disruption, might imply moving to disqus (see point above) and so forth.

Porting Prof. Lemire's blog started on 12-Dec-2023 and was "finished" on 14-Jan-2024, including porting all comments to HashOver. Of course, I did not work on this full-time.

There are still some open issues pending regarding conversion and functionality:

  1. Some pages have wrong formatting, e.g., there is bold printing in the converted site not present in the original.
  2. Left and right double quotes have been converted to HTML codes. Entering those is not very convenient. We clearly want SmartyPants.
  3. Five URLs were not correctly mapped as they contain special characters.
  4. E-mail subscription is absent. Although I doubt that there really are 12,500 active subscribers, there are probably a lot who want to be notified when something new arrives. One possible approach is to use Buttondown. For example, Buttondown can send e-mails based on RSS, see below screenshot from the "Settings" dialog in Buttondown.

Tool Purpose Technology
Simplified Saaze Static site generator PHP, C
HashOver Commenting system PHP, XML/JSON/SQLite
Pagefind Static search JavaScript, Rust, WebAssembly
]]>
https://eklausmeier.goip.de/blog/2023/12-03-converting-bachelor-thesis-from-latex-to-markdown https://eklausmeier.goip.de/blog/2023/12-03-converting-bachelor-thesis-from-latex-to-markdown Converting Bachelor Thesis from LaTeX to Markdown Sun, 03 Dec 2023 19:00:00 +0100 1. Problem statement. You have a Bachelor thesis written in LaTeX. This thesis is to be converted to Markdown. I had written on a similar topic here: Converting Journal Article from LaTeX to Markdown.

2. Solution. I already knew that a Pandoc approach does not work. For the conversion I modified the two Perl scripts used for the journal article conversion:

  1. blogparsec
  2. blogbibtex

The result is in:

  1. blogOnlineDialARide
  2. blogbibtex

Using those two scripts, creating the Markdown file goes like this:

blogOnlineDialARide einleitung.tex chapter1.tex chapter2.tex chapter3.tex > .../2020/10-15-online-dial-a-ride.md
blogbibtex thesis.bib >> .../2020/10-15-online-dial-a-ride.md

The file 10-15-online-dial-a-ride.md still needs some manual editing:

  1. move the table of contents to the top, as it is appended at the end together with the blogbibtex output
  2. insert an image for the algorithm

3. blogOnlineDialARide script. Some notes on this Perl script. The input to this script is the concatenation of all relevant LaTeX files.

First define some variables and use strict mode.

use strict;
my ($ignore,$inTable,$inAlgo,
    $chapterCnt,$sectionCnt,$subSectionCnt,$theoremCnt,$itemCnt,
    $claimCnt,$eqnCnt,$eqnFlag,$tableCnt,$tabInsert,$caseCnt,
    $enumerate,$prefix) = (0,0,0, 0,0,0,0,0, 0,0,0,0,0,0, 0,"");
my (@sections) = ();
my (%H,%Hphp) = ( (), () );  # hash for key=\label, value=\ref, in our case for lemmatas and theorems

The frontmatter header is a simple here-document. It also defines some PHP variables, which are needed because the thesis makes some forward references to tables and lemmas, and we want only a single pass over the document file.

print <<'EOF';
---
date: "2020-10-15 14:00:00"
title: "Online Dial-A-Ride"
description: "We consider the online Dial-a-Ride Problem where objects are to be transported between points in a metric space in the shortest possible completion time."
MathJax: true
categories: ["mathematics"]
tags: ["ABORT-OR-REPLAN", "Dial-A-Ride", "online optimization"]
author: "Roman Edenhofer"
---


<!-- https://docs.mathjax.org/en/latest/input/tex/eqnumbers.html -->
<script type="text/javascript">
    window.MathJax = { tex: { tags: 'ams' } };
</script>

<?php	// forward references in text
    $tab__ABORT = "1";
    $tab__AAW = "2";
    $tab__state_of_the_art = "3";
    $lemma__new_extreme = "3.11";
    $lemma__waiting = "3.12";
    $lemma__aborting = "3.13";
    $lemma__abc = "3.14";
    $lemma__unique_tour = "3.15";
    $lemma__upwards = "!unknown!";
?>

EOF

The main loop looks at each line of the input. After the loop the literature section is added, then all sections collected so far are printed.

while (<>) {
    chomp;
    if (/\\end\{tabular\}/) { $ignore = 0; next; }
    next if ($ignore);

    next if (/\\addcontentsline\{toc\}/);
    (...)
    print $prefix . $_ . "\n";
}


print "## Literature<a id=Literature></a>\n";
for (@sections) {
    print $_ . "\n";
}
++$sectionCnt;
print "- [$sectionCnt. Literature](#Literature)\n";

What follows is the part which is marked as (...) in above code. First of all, just drop irrelevant space.

    # Space handling
    s/\s+$//g;	# rtrim
    s/^\s+//g;	# ltrim, i.e., erase leading space
    s/\~/ /g;

    s/\s+%\s+[^%].+$//;	# Drop LaTeX comments
    s/^%.*//g;

The tables are replaced by manually entered Markdown tables:

    s/\\normalsize//;
    if (/\\end\{table\}/) {
        print $table[$tabInsert++];
        $inTable = 0;
        next;
    }
    if ($inTable) {
        s/\\caption\{([^\}]+)\}/\n\n__Table $tableCnt:__ $1\n/;
    }
    if (/\\begin\{table\}/) {
        ($ignore,$inTable) = (1,1);
        next;
    }

The @table array is initialized at the top of the Perl file like this:

my @table = (
'
Case    | ABORT                      | open | closed
--------|----------------------------|------|---------
general | uncapacitated ($c=\infty$) | 3    | 2.5
general | preemptive                 | 3    | 2.5

',
'
Case    | ABORT-AND-WAIT             | open   | closed
--------|----------------------------|--------|---------
general | uncapacitated ($c=\infty$) | 2.4142 | 2.5
general | preemptive                 | 2.4142 | 2.5

',
'
Case    | General Bounds                | open<br>lower bound | open<br>upper bound | closed<br>lower bound | closed<br>upper bound
--------|-------------------------------|---------------------|---------------------|-----------------------|----------------------
general | non-preemptive $(c < \infty)$ | 2.0585 | 2.6180 ([MLipmann][]) | 2 | 2 ([Ascheuer][])
general | uncapacitated $(c=\infty)$    | 2.0346 | 2.4142 ([BjeldeDisser17][]) | 2 | 2
general | preemptive                    | 2.0346 | __2.4142__ | 2 (Thm 3.2 in [Ausiello][]) | 2
general | TSP                           | 2.0346 | 2.4142 | 2 | 2
---     |                               |        |        |   |
line    | non-preemptive $(c < \infty)$ | 2.0585 (Thm 1 in [Birx19][]) | 2.6180 | 1.75 ([BjeldeDisser17][]) | 2
line    | uncapacitated $(c=\infty)$    | 2.0346 | 2.4142 | 1.6404 | 2
line    | preemptive                    | 2.0346 | 2.4142 ([BjeldeDisser17][]) | 1.6404 | 2
line    | TSP                           | 2.0346 ([BjeldeDisser17][]) | 2.4142 ([BjeldeDisser17][]) | 1.6404 (Thm 3.3 in [Ausiello][]) | 1.6404 ([BjeldeDisser17][])
---     |                               |        |        |   |
halfline| non-preemptive $(c < \infty)$ | 1.8968 ([MLipmann][]) | 2.6180 | 1.7071 ([Ascheuer][]) | 2
halfline| uncapacitated $(c=\infty)$    | 1.6272 | 2.4142 | 1.5 | __1.8536__
halfline| preemptive                    | 1.6272 | 2.4142 | 2 | 2
halfline| TSP                           | 1.6272 | 2.4142 ([MLipmann][]) | 1.5 ([MRIN][]) | 1.5 ([MRIN][])


'
);

Then these predefined elements are inserted one by one: $table[0], $table[1], etc.

Many special cases are handled, which are specific to this document.

    # Special cases:
    s/ \\AOR-server / AOR-server /g;	# \AOR outside of math-mode
    s/ while \\\\ / while /;
    # MathJax bug prevention
    s/^Suppose that \$\(L\^\*/Suppose that\n\$\$\n\(L\^\*/;
    s/ p_R\$\./ p_R\.\n\$\$/;
    s/L\^\*/L\^\{\\ast\}/g;
    s/\{t\^start\}/\{eqn: t\^start\}/g;
    # forward reference resolution
    s/\\ref\{lemma: waiting\}/[\<\?=\$lemma__waiting\?>\](#lemma__waiting)/g;
    # MathJax shortcoming
    s/\\makebox\[0pt\]\{\\text\{(|\\scriptsize)/\{\{/;
    # double } resolution in eqn:
    s/eqn: OPT\(t_\{i-j\}\)/eqn: OPT\(t_Ci-jD\)/g;
    s/eqn: p\^\{AOR\}/eqn: p^cAORd/g;
    s/eqn: T\^\{return\}/eqn: T\^CreturnD/g;
    s/eqn: L\^\{\\ast\}/eqn: L\^C\\astD/g;
    # Some simple conversions to Markdown
    s/\\textit\{([^\}]+)\}/_$1_/g;

Display math is enclosed in double dollars keeping the \begin{align} and \end{align} stuff:

    if (/\\begin\{align(|\*)\}/) {
        print "\$\$\n\\begin{align$1}\n";
        next;
    } elsif (/\\end\{align(|\*)\}/) {
        if ($eqnFlag) { print "\\end{align$1}\n\t\\tag{$eqnCnt}\n\$\$\n"; $eqnFlag = 0; }
        else { print "\\end{align$1}\n\$\$\n"; }
        next;
    }

Most algorithms are replaced by HTML quotations. One algorithm, which is particularly "complex", is simply replaced by an image (screenshot).

    if (/\\begin\{algorithm\}/) {
        ($inAlgo,$prefix) = (1,'> ');
        next;
    } elsif (/\\end\{algorithm\}/) {
        ($inAlgo,$prefix) = (0,'');
        next;
    }
    if ($inAlgo == 1) {
        next if (/\\SetKwData|\\SetKwFunction|\\SetKwInOut/);
        s/\\;$/<br>/;
        s/\\caption\{(.+)\}$/__$1__<br>/;
        s/\\Input\{(.+)\}$/__input:__ $1/;
        s/\\Output\{(.+)\}$/__output:__ $1/;
    }

The most difficult part was to replace numbered theorems, lemmas, definitions, and claims with something automatic. I use a combination of Perl numbering and PHP variables. So forward or backward references look like this: [look here: $Perl variable](#$PHPvariable).

    ++$theoremCnt if (/\\begin\{(theorem|lemma|remark)\}/);
    ++$claimCnt if (/\\begin\{claim\}/);
    ++$caseCnt if (/\\begin\{case\}/);
    s/\\begin\{definition\}/<p><\/p>\n\n---\n\n__Definition.__/;
    s/\\begin\{theorem\}/<p><\/p>\n\n---\n\n__Theorem ${chapterCnt}.${theoremCnt}.__/;
    s/\\begin\{lemma\}/<p><\/p>\n\n---\n\n__Lemma ${chapterCnt}.${theoremCnt}.__/;
    s/\\begin\{remark\}/<p><\/p>\n\n---\n\n__Remark ${chapterCnt}.${theoremCnt}.__/;
    s/\\begin\{claim\}/<p><\/p>\n\n__Claim ${claimCnt}.__/;
    s/\\begin\{case\}/<p><\/p>\n\n_Case ${caseCnt}._/;
    s/\\end\{(theorem|lemma|remark|claim|case)\}//;
    s/\\end\{definition\}/\n---\n<p><\/p>\n/;
    s/\\begin\{proof\}/<p><\/p>\n\n_Proof._/;
    s/\\end\{proof\}/&nbsp; &nbsp; &#9744;\n\n/;

    if (/^\\label\{(.+)\}$/) {
        my ($phpvar,$key) = ($1,$1);
        $phpvar =~ s/( |:|"|\^|\{|\}|<|>|\\|\/|\*)/_/g;	# create valid PHP variable out of \label
        $Hphp{$key} = $phpvar;
        if ($key =~ /^(th|lemma)/) {
            $H{$key} = "${chapterCnt}.${theoremCnt}";
        } elsif ($key =~ /^eqn/) {
            ++$eqnCnt;
            $eqnFlag = 1;
            $H{$key} = "${eqnCnt}";
            next;
        } elsif ($key =~ /^claim/) {
            $H{$key} = "${claimCnt}";
        } elsif ($key =~ /^chapter/) {
            $H{$key} = "s${chapterCnt}";
        } elsif ($key =~ /^tab/) {
            ++$tableCnt if (!defined($H{$key}));
            $H{$key} = "${tableCnt}";
        } else {
            $H{$key} = "unknown hash H: key=$key";
        }
        #$_ = '<a id="'.$phpvar.'"></a>';
        $_ = '<a id="'.$phpvar.'"></a><?php $'.$phpvar.'="'.$H{$key}.'"; ?>';
    }
    #s/\\ref\{(.+?)\}(\)|\.| )/\[$H{$1}\](#s$H{$1})$2/g;
    #s/\\ref\{(.+?)\}(\.| )/\[$H{$1}\](#"s$1")$2/g;
    #s/\\ref\{(.+?)\}(\.| )/\[$H{$1}\](#\*<\?=\$$Hphp{$1}\?>\*)$2/g;
    #s/\\ref\{(.+?)\}(\.| )/\[$H{$1}\](#$Hphp{$1})$2/g;
    #good (almost): s/\\ref\{(.+?)\}(\.| )/\[<\?=\$$Hphp{$1}\?>\](#$Hphp{$1})$2/g;
    #while (/\\ref\{(.+?)\}(\.|\)| )/g) {
    while (/\\ref\{([^\}]+?)\}/g) {
        my $key = $1;
        if (!defined($H{$key})) {
            print STDERR "key=|$key| undefined in H\n";
            my $phpvar = $1;
            $phpvar =~ s/( |:|"|\^|\{|\}|<|>|\\|\/|\*)/_/g;	# create valid PHP variable out of \label
            $Hphp{$key} = $phpvar;
            if ($key =~ /^tab/) {	# unfortunately, tables are forward referenced
                ++$tableCnt;
                $H{$key} = "tab${tableCnt}";
            }
        }
        if ($key =~ /^eqn/) {
            s/\\ref\{(.+?)\}/$H{$1}/g;
        } else {
            s/\\ref\{(.+?)\}(\.|\)| )/\[<\?=\$$Hphp{$1}\?>\](#$Hphp{$1})$2/g;
        }
    }

Again, many thesis-specific changes.

    # Substitute own TeX macros
    s/\\N([^\w])/\\mathbb\{N\}$1/g;
    s/\\R([^\w])/\\mathbb\{R\}$1/g;
    #s/\\Q([^\w])/\\mathbb\{Q\}$1/g;
    #s/\\M([^\w])/\\mathcal\{M\}$1/g;
    s/\\ABORT/\\hbox\{ABORT\}/g;
    s/\\OPT/\\hbox\{OPT\}/g;
    s/\\ALG/\\hbox\{ALG\}/g;
    s/\\AAW/\\hbox\{AAW\}/g;
    s/\\AOR/\\hbox\{AOR\}/g;
    s/\\DOWN/\\hbox\{DOWN\}/g;
    s/\\abort/\\hbox\{abort\}/g;
    s/\\replan/\\hbox\{replan\}/g;
    s/\\diff/\\hbox\{diff\}/g;
    s/\\prepared/\\hbox\{prepared\}/g;
    s/\\start/\\hbox\{start\}/g;
    s/\\ente/\\hbox\{ente\}/g;
    s/\\move/\\hbox\{move\}/g;
    s/\\waituntil/\\hbox\{waituntil\}/g;
    s/\\return/\\hbox\{return\}/g;
    s/\\new/\\hbox\{new\}/g;
    s/\\Return/__return:__/g;

    s/\\Tilde/\\tilde/g;

    # Lines to drop, not relevant
    next if (/\\DontPrintSemicolon/);

Handling of items in LaTeX.

    if (/\\begin\{itemize\}/) {
        ($enumerate,$itemCnt,$_) = (0,1,'');
    } elsif (/\\begin\{enumerate\}/) {
        ($enumerate,$itemCnt,$_) = (1,1,'');
    } elsif (/\\end\{(itemize|enumerate)\}/) {
        ($enumerate,$itemCnt,$_) = (0,0,'');
    }
    if (/^\\item /) {
        if ($enumerate) {
            s/\\item /${itemCnt}. /;
            ++$itemCnt;
        } else {
            s/\\item /\* /;
        }
    }
    if (/\\item\[([^\]]+)\]/) {
        s/\\item\[([^\]]+)\]/${itemCnt}. /;
        ++$itemCnt;
    }

Handling of chapters, sections, and subsections. For all three I use different Perl counters. All chapters, sections, etc. can be jumped to. They are referenced by #s, followed by chapter number, section number, etc.

    # sections + subsections
    if (/\\chapter\*\{(\w+)\}/) {	# unnumbered section, line "Introduction"
        my $s = $1;
        push @sections, "- [$s](#s$s)";
        $_ = "\n## $s<a id=s$s></a>\n";
    } elsif (/\\chapter\{(.+?)\}\s*$/) {
        my $s = $1;
        ++$chapterCnt; $sectionCnt = 0; $subSectionCnt = 0; $theoremCnt = 0;
        push @sections, "- [$chapterCnt. $s](#s$chapterCnt)";
        $_ = "\n## $chapterCnt. $s<a id=s$chapterCnt></a>\n";
    } elsif (/\\section\{(.+?)\}\s*$/) {
        my $s = $1;
        ++$sectionCnt; $subSectionCnt = 0;
        push @sections, "- [$chapterCnt.$sectionCnt $s](#s${chapterCnt}_${sectionCnt})";
        $_ = "\n### $chapterCnt.$sectionCnt $s<a id=s${chapterCnt}_$sectionCnt></a>\n";
    } elsif (/\\subsection\{(.+?)\}\s*$/) {
        my $s = $1;
        ++$subSectionCnt;
        push @sections, "\t- [$chapterCnt.$sectionCnt.$subSectionCnt $s](#s${chapterCnt}_${sectionCnt}_$subSectionCnt)";
        $_ = "\n#### $chapterCnt.$sectionCnt.$subSectionCnt $s<a id=s${chapterCnt}_${sectionCnt}_$subSectionCnt></a>\n";
    }

Citations are easy. I use a feature of Markdown/CommonMark called link references. The link in the text looks like this: [literature123][]; the actual definition can be anywhere, for example at the end of the document, and looks like this: [literature123]: ....

    # Citations
    s/\\citeauthor\{([\-\w]+)\} \\cite\{([\-\w]+)\}/\[$1\]\[\]/g;

During development of this Perl script I used Beyond Compare again quite intensively, to compare the original against the changed file.

4. blogbibtex script. The input to this script is the BibTeX file with all literature references. The BibTeX file looks something like this:

@inproceedings{Ascheuer,
author = {Ascheuer, Norbert and Krumke, Sven Oliver and Rambau, J\"{o}rg},
title = {Online Dial-a-Ride Problems: Minimizing the Completion Time},
year = {2000},
isbn = {3540671412},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
booktitle = {Proceedings of the 17th Annual Symposium on Theoretical Aspects of Computer Science},
pages = {639–650},
numpages = {12},
series = {STACS '00}
}

@article{Ausiello,
author = {Ausiello, Giorgio and Feuerstein, Esteban and Leonardi, S. and Stougie, L. and Talamo, Maurizio},
year = {2001},
month = {04},
pages = {560-581},
title = {Algorithms for the On-Line Travelling Salesman},
volume = {29},
journal = {Algorithmica},
doi = {10.1007/s004530010071}
}

The Perl script is just a slightly modified version of the script used in Converting Journal Article from LaTeX to Markdown.
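To give a flavor of what such a conversion does, below is a minimal sketch, not the actual blogbibtex: it turns each BibTeX entry into a CommonMark link-reference definition as described in the citations section above. The DOI-to-URL mapping and the #Literature fallback are my assumptions.

#!/bin/perl -W
# Sketch: convert BibTeX entries to Markdown link references, e.g.
# [Ausiello]: https://doi.org/10.1007/s004530010071 "Algorithms for the On-Line Travelling Salesman"
use strict;

my ($key,%F) = ('',());
while (<>) {
    if (/^@\w+\{([\-\w]+),/) { ($key,%F) = ($1,()); }	# entry start, remember citation key
    elsif (/^(\w+)\s*=\s*\{(.+)\}/) { $F{lc $1} = $2; }	# collect fields like title, doi
    elsif (/^\}/ && $key ne '') {	# entry end: emit one link-reference definition
        my $url = defined($F{doi}) ? "https://doi.org/$F{doi}" : "#Literature";
        printf("[%s]: %s \"%s\"\n", $key, $url, $F{title} // '');
        ($key,%F) = ('',());
    }
}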

]]>
https://eklausmeier.goip.de/blog/2023/12-01-our-neighborhood-in-the-milky-way-in-3d https://eklausmeier.goip.de/blog/2023/12-01-our-neighborhood-in-the-milky-way-in-3d Our Neighborhood in the Milky Way in 3D Fri, 01 Dec 2023 10:40:00 +0100 Press release

High-resolution three-dimensional maps of the Milky Way have previously been limited to the immediate vicinity of the Sun. In a collaboration led by the Max Planck Institute for Astrophysics with researchers from Harvard, the Space Telescope Science Institute, and the University of Toronto, we were now able to build a high-resolution map of the Milky Way in 3D out to more than 4,000 light-years. The produced 3D map will be highly useful for a wide range of applications from star formation to cosmological foreground correction.

When we think about the Milky Way, we often think about 2D images of the night sky or artist's impressions of how the Milky Way might look from outside our Galaxy. With the advent of Gaia, we are entering a new era of Milky Way science, in which we begin to unfold our previous 2D view of the Milky Way into a rich 3D picture. In recent years, we started to build 3D maps of the distribution of matter in the immediate vicinity of the Sun out to approximately 1,000 light-years. Thanks to these maps, we were able to study the star formation around the Sun in 3D, made numerous discoveries about the shape, mass, and density of nearby molecular clouds, and learned how supernova feedback shaped the space around the Sun.

At the core of maps of the 3D distribution of matter in the Milky Way lies interstellar dust. Interstellar dust closely traces the distribution of matter, cools gas such that stars can form, agglomerates to form planets, and obscures astrophysical observations. Incidentally, this obscuration allows us to quantify the amount of dust between us, on Earth, and the astrophysical object we want to observe in the background, often stars. We can infer the 3D distribution of dust and thus indirectly trace the distribution of matter in the Galaxy using this information. To do so, we combine millions of measurements of the amount of dust to background objects with distance estimates to said objects from Gaia.

Inferring the distribution of dust in the Milky Way from distances and dust measurements is a computationally intensive, statistical inverse problem. The problem is ill-posed: from our limited data and prior knowledge about dust, it is not possible to retrieve a definite answer about the true distribution of dust. Still, the language of statistics allows us to translate our noisy data with a physics-informed model of dust into a 3D dust map with rigorously quantified uncertainties. Until now, however, the computational costs of 3D dust models have limited the size of the probed volume.

Bird's-eye view of the distribution of dust within 4,077 light-years around the Sun. The Sun is at the center and the galactic center is to the right.

Recent progress in our physics-informed model of dust enabled us to probe much larger distances. We put forward a new statistical method to model spatially smooth structures in large volumes – a required component of dust maps. At the heart of the new method is an algorithm to iteratively add ever-finer details to a coarse representation of 3D dust. By adding details iteratively instead of modelling everything at once, the modelling problem drastically simplifies and becomes faster by orders of magnitude.

We combined the new methodological developments with the latest processed Gaia data to create the largest high-resolution map of interstellar dust to date. The new 3D dust map extends 4,077 light-years in all directions from the Sun with a resolution of a few light-years. The produced 3D map will be highly useful for studying the medium between stars in the Milky Way. Understanding the structure of the interstellar medium will help us constrain key relations for star formation. In addition, the 3D dust map will be important for correcting astrophysical observations. For many observations, the interstellar medium in front of the object of interest is a nuisance. The new 3D dust map will allow correcting these measurements for the foreground material in a much larger volume than previous maps.

The distribution of dust out to 4,077 light-years around the Sun rotating around the galactic z-axis. The red line indicates the galactic x-axis toward the galactic center, the green line the galactic y-axis, and the blue line the galactic z-axis. ]]>
https://eklausmeier.goip.de/blog/2023/11-29-linux-on-android-devices https://eklausmeier.goip.de/blog/2023/11-29-linux-on-android-devices Linux on Android Devices Wed, 29 Nov 2023 20:00:00 +0100 Android is based on Linux. Unfortunately, the Linux on Android devices is severely restricted, in particular you cannot easily become the root user. Using Termux and the like you can get a little bit of the "usual" Linux feeling on Android devices. In addition to that, each hardware manufacturer, like Samsung etc., modifies the Android system and uses a different kernel.

As of today there are some serious efforts to replace Android with a "real" Linux.

- Linux
  - Android
    - Samsung Android
    - Oppo ColorOS
    - Oneplus OxygenOS
    - Xiaomi MIUI
    - ...
    - CyanogenMod
      - LineageOS
  - Alpine Linux
    - postmarketOS
  - Palm webOS
    - webOS

It is speculated that the postmarketOS approach is the most promising. Below, I quote various passages from their website.

We are sick of not receiving updates shortly after buying new phones. Sick of the walled gardens deeply integrated into Android and iOS.

The heritage:

postmarketOS is based on Alpine Linux, which is so tiny (less than 10 MB in size) that development of pmOS can be done quickly on any Linux distribution.

The consequence of this:

The above design decisions make it feasible to keep the system up-to-date, for all devices at once! Compared to Android, it makes development more efficient ...

The rather cumbersome Android build system is not used:

We avoid Android's build system entirely. Instead of building a monolithic system image for each and every device, the whole OS is divided into small packages. These same package binaries can be installed on all devices that share the same CPU architecture. Device specific parts are kept as minimal as possible, ideally there is only one device package.

postmarketOS uses the ext4 filesystem!

From the FAQ: Will Android apps be supported?

We support Android apps through Waydroid!

The list of supported devices is quite impressive.

postmarketOS can use below user interfaces:

  1. Phosh based on GNOME
  2. Plasma Mobile based on KDE
  3. Sxmo, a tiling window manager
  4. Xfce
  5. MATE based on GNOME
  6. and others
]]>
https://eklausmeier.goip.de/blog/2023/11-14-introduction-to-mle-small-terminal-based-editor https://eklausmeier.goip.de/blog/2023/11-14-introduction-to-mle-small-terminal-based-editor Introduction to mle: Small Terminal Based Editor Tue, 14 Nov 2023 18:00:00 +0100 1. Motivation. I am a regular user of vi/vim/neovim. But one thing is a little bit annoying when using neovim: even on a fast machine, neovim takes quite a considerable time to start, though this is mostly caused by an elaborate initialization file. mle is a text editor written by Adam Saponara, who was mentioned multiple times in the talks of Rasmus Lerdorf, the creator of PHP. mle as of version 1.7.2 is written in C and is less than 17 kLines.

Source files Number LOC
*.c 64 12,098
*.h 5 4,631

2. Installation. The accompanying Makefile is ready to use, i.e., no configure is required. Just compile (=make) and install (=make install). On Arch Linux use AUR package mle. One particularly good AUR helper is trizen.

3. Size comparison. Comparing the library dependencies for mle, vim and nvim:

$ ldd /bin/mle /bin/vim /bin/nvim
/bin/mle:
        linux-vdso.so.1 (0x00007fff4e28d000)
        libpcre2-8.so.0 => /usr/lib/libpcre2-8.so.0 (0x00007fb7a6246000)
        liblua.so.5.4 => /usr/lib/liblua.so.5.4 (0x00007fb7a61ff000)
        libm.so.6 => /usr/lib/libm.so.6 (0x00007fb7a6112000)
        libc.so.6 => /usr/lib/libc.so.6 (0x00007fb7a5f30000)
        /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fb7a6362000)
/bin/vim:
        linux-vdso.so.1 (0x00007ffefa6e9000)
        libm.so.6 => /usr/lib/libm.so.6 (0x00007fc0cd313000)
        libncursesw.so.6 => /usr/lib/libncursesw.so.6 (0x00007fc0cd8f4000)
        libacl.so.1 => /usr/lib/libacl.so.1 (0x00007fc0cd8eb000)
        libgpm.so.2 => /usr/lib/libgpm.so.2 (0x00007fc0cd8e3000)
        libc.so.6 => /usr/lib/libc.so.6 (0x00007fc0cd131000)
        /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fc0cd9a1000)
/bin/nvim:
        linux-vdso.so.1 (0x00007ffdefdbd000)
        libluv.so.1 => /usr/lib/libluv.so.1 (0x00007f190e1bc000)
        libtermkey.so.1 => /usr/lib/libtermkey.so.1 (0x00007f190e1b0000)
        libvterm.so.0 => /usr/lib/libvterm.so.0 (0x00007f190e19d000)
        libmsgpackc.so.2 => /usr/lib/libmsgpackc.so.2 (0x00007f190e194000)
        libtree-sitter.so.0 => /usr/lib/libtree-sitter.so.0 (0x00007f190e166000)
        libunibilium.so.4 => /usr/lib/libunibilium.so.4 (0x00007f190e151000)
        libluajit-5.1.so.2 => /usr/lib/libluajit-5.1.so.2 (0x00007f190e0be000)
        libm.so.6 => /usr/lib/libm.so.6 (0x00007f190db13000)
        libuv.so.1 => /usr/lib/libuv.so.1 (0x00007f190dadf000)
        libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f190daba000)
        libc.so.6 => /usr/lib/libc.so.6 (0x00007f190d8d8000)
        /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f190e223000)

mle uses:

  1. uthash for hash maps and linked lists
  2. termbox2 for text-based UI
  3. PCRE2 for syntax highlighting and search
  4. Lua as a macro language

Comparing file sizes of the executables for mle, nano, neovim, and vim:

$ ls -l /bin/mle /bin/nano /bin/vim /bin/nvim
-rwxr-xr-x 1 root root  298752 Oct 29 22:22 /bin/mle*
-rwxr-xr-x 1 root root  278856 Jan 18  2023 /bin/nano*
-rwxr-xr-x 1 root root 4795872 Oct 10 13:39 /bin/nvim*
-rwxr-xr-x 1 root root 4848056 Oct 26 22:39 /bin/vim*

4. Speed comparison. Below is a comparison of the starting times for a 187 MB sized file conducted in 2016 by Adam Saponara:

Version Command time in s
mle 1.0 mle -Qq bigfile 0.531
vim 7.4 vim -u NONE -c q bigfile 1.382

I tried two files, both stored in /tmp, which is in RAM on Arch Linux kernel 6.6.1:

  1. seq 999000 > x9, size is 6.6 MB, time wc x9 is 0.02s
  2. seq 9999000 > x9b, size is 76 MB, time wc x9b is 0.17s

Starting times for mle, vim, and neovim are as below:

Version Command real/s Command real/s
mle 1.7.2 mle -N -Qq x9 0.04 mle -N -Qq x9b 0.36
vim 9.0.2070 vim -u NONE -c q x9 0.04 vim -u NONE -c q x9b 0.36
neovim 0.9.4 nvim -u NONE -c q x9 0.05 nvim -u NONE -c q x9b 0.33

Apparently, the speed advantage cannot be reproduced with these two particular files. vim and neovim spend only about half of their time actually processing the file; the other half is needed just for reading the content, as can be seen from the wc times.

More technical details on benchmarking can be found here: Full soft-wrap implementation #77.

5. Basic usage. In the following we use below abbreviations for keys. As usual, you have to press them all at once.

Key Meaning
S Shift
M Alt (also called Meta)
MS Alt-Shift
C Ctrl
CS Ctrl-Shift
CM Ctrl-Alt
CMS Ctrl-Alt-Shift

Some basic file operations within the editor:

Task mle vi
Opening file C-o :r
Saving file C-s :w
Quit C-x :q
Help text F2

mle supports editing multiple files at once. Switching between buffers is by using M-1, M-2, M-3, etc.

Photo

Once you press F2 (the help key) then automatically a new buffer is opened. To switch back to your original file you would use M-1.

6. Moving the cursor around. Below commands just move the cursor around and do not change the file content in any way.

Task mle vi
jump over word in right direction M-f w
jump over word in left direction M-b b
top, bottom, or center C-l
Search for string C-f /
Find next C-g n
Go to line M-g :
Set mark a (or b, etc.) M-za ma
Go to mark a M-za 'a
Go to last mark M-m

7. Copying, deleting, or moving text. Below commands change the content of the file.

Task mle vi
Cut marked text or whole line C-k y
Uncut, usually called paste C-u p
Indent with one tab M-. >>
Outdent one tab M-, <<
Delete word to the right M-d dw
Delete word to the left C-w bdw
Repeat last operation F5 .
Inserting output from shell M-e r!

When you start mle, then whenever you enter a TAB it will be changed to spaces. To change this behaviour you enter M-o a, and then enter y at the prompt. If you want to go back to the automatic tab-to-space conversion, then enter a number at the prompt.

8. Useful startup file. Below startup file for mle, also called rc file, named .mlerc, is located in the user's home directory:

-Kklm,,1
-kcmd_move_beginning,C-home,
-kcmd_move_end,C-end,
-nklm
-w1
-t8
-e1
-a0
     <empty line>

Ignore the first four lines for the moment. The meaning of the remaining lines is as follows: enable word wrap (-w1), set the tab size to 8 characters (-t8), enable mouse support (-e1), keep tabs as tabs (-a0). This is equivalent to starting mle with below command line arguments:

mle -w1 -t8 -e1 -a0

If the startup file is executable then the output of the file is taken as actual rc file. So the startup can be changed conditionally.
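For example, the rc file could be a small executable Perl script; below is a minimal sketch, where the host-dependent tab size is a made-up example:

#!/bin/perl -W
# Executable .mlerc: mle takes this script's output as the actual rc file
use strict;

my $tab = (($ENV{HOSTNAME} // '') eq 'work') ? 4 : 8;	# hypothetical condition
print "-w1\n-t$tab\n-e1\n-a0\n";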

9. Setting or redefining keys. mle allows you to set or redefine key bindings. This is a 3-step process.

  1. You define a so called kmap using command line option -K
  2. Within this kmap you specify pairs of commands and keys using option -k
  3. You instruct mle to use this new kmap with option -n

For example the standard mle key binding for jumping to the end of the file is M-/. In Google Chrome or many editors this is C-end. Specifying this is thus:

mle -K 'klm,,1' -k 'cmd_move_end,C-end,' -n klm <file>
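These are exactly the first four lines of the .mlerc shown above: -Kklm,,1 defines the kmap klm, the two -k lines bind cmd_move_beginning and cmd_move_end to C-home and C-end, and -nklm activates the kmap.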

10. Lua macros. Below table information is extracted from uscript.lua.

B B-M M-U
buffer_add_mark bview_new mark_find_bracket_top
buffer_add_mark_ex bview_open mark_find_next_re
buffer_add_srule bview_pop_kmap mark_find_next_str
buffer_apply_styles bview_push_kmap mark_find_prev_re
buffer_clear bview_rectify_viewport mark_find_prev_str
buffer_delete bview_remove_cursor mark_get_between
buffer_delete_w_bline bview_remove_cursors_except mark_get_char_after
buffer_destroy bview_resize mark_get_char_before
buffer_destroy_mark bview_set_syntax mark_get_nchars_between
buffer_get bview_set_viewport_y mark_get_offset
buffer_get_bline bview_split mark_insert_after
buffer_get_bline_col bview_wake_sleeping_cursors mark_insert_before
buffer_get_bline_w_hint bview_zero_viewport_y mark_is_after_col_minus_lefties
buffer_get_lettered_mark cursor_clone mark_is_at_bol
buffer_get_offset cursor_cut_copy mark_is_at_eol
buffer_insert cursor_destroy mark_is_at_word_bound
buffer_insert_w_bline cursor_drop_anchor mark_is_between
buffer_new cursor_get_anchor mark_is_eq
buffer_new_open cursor_get_lo_hi mark_is_gt
buffer_open cursor_get_mark mark_is_gte
buffer_redo cursor_lift_anchor mark_is_lt
buffer_redo_action_group cursor_replace mark_is_lte
buffer_register_append cursor_select_between mark_join
buffer_register_clear cursor_select_by mark_move_beginning
buffer_register_get cursor_select_by_bracket mark_move_bol
buffer_register_prepend cursor_select_by_string mark_move_bracket_pair
buffer_register_set cursor_select_by_word mark_move_bracket_pair_ex
buffer_remove_srule cursor_select_by_word_back mark_move_bracket_top
buffer_replace cursor_select_by_word_forward mark_move_bracket_top_ex
buffer_replace_w_bline cursor_toggle_anchor mark_move_by
buffer_save cursor_uncut mark_move_col
buffer_save_as editor_bview_edit_count mark_move_end
buffer_set editor_close_bview mark_move_eol
buffer_set_action_group_ptr editor_count_bviews_by_buffer mark_move_next_re
buffer_set_callback editor_destroy_observer mark_move_next_re_ex
buffer_set_mmapped editor_display mark_move_next_re_nudge
buffer_set_styles_enabled editor_force_redraw mark_move_next_str
buffer_set_tab_width editor_get_input mark_move_next_str_ex
buffer_substr editor_menu mark_move_next_str_nudge
buffer_undo editor_notify_observers mark_move_offset
buffer_undo_action_group editor_open_bview mark_move_prev_re
buffer_write_to_fd editor_prompt mark_move_prev_re_ex
buffer_write_to_file editor_register_cmd mark_move_prev_str
bview_add_cursor editor_register_observer mark_move_prev_str_ex
bview_add_cursor_asleep editor_set_active mark_move_to
bview_center_viewport_y mark_clone mark_move_to_w_bline
bview_destroy mark_clone_w_letter mark_move_vert
bview_draw mark_delete_after mark_replace
bview_draw_cursor mark_delete_before mark_replace_between
bview_get_active_cursor_count mark_delete_between mark_swap
bview_get_split_root mark_destroy util_escape_shell_arg
bview_max_viewport_y mark_find_bracket_pair util_shell_exec

11. Limitations. Unfortunately, mle has some shortcomings.

  1. mle has no line-wrap functionality, so long lines do not wrap at the edge of the screen. For source code files this is fine, but for Markdown files it is a severe restriction.
  2. This only happens on st: mle -Qk does not distinguish Shift-Home from Home, therefore you cannot bind the combination Shift-Home to mean "go to top of page". This works perfectly fine on xterm.
  3. Text cut in file1 is not available for pasting in file2 if text has also been cut in file2, i.e., each buffer keeps its own cut text.
https://eklausmeier.goip.de/blog/2023/11-02-cloudflare-dashboard-down https://eklausmeier.goip.de/blog/2023/11-02-cloudflare-dashboard-down Cloudflare Dashboard Down Thu, 02 Nov 2023 18:00:00 +0100 This blog is self-hosted. One might think that such a setup is particularly prone to outages. Actually, this hosting is quite stable compared to professional services, much to my own surprise.

Today Cloudflare had its snafu moment:

Cloudflare is assessing a loss of power impacting data centres while simultaneously failing over services.

So even the biggest players in the pond suffer from power outages from time to time.

The following products are currently impacted at the data plane / edge level, meaning that the full product functionality is either partially or fully affected: Logpush, WARP / Zero Trust device posture, Cloudflare dashboard, Cloudflare API, Stream API, Workers API, Alert Notification System.

As I keep a copy of this blog on Cloudflare Workers, I cannot update that copy, at least not today.

Photo

Uploading a zip-file also does not work.

Photo

Added 03-Nov-2023: Cloudflare was hit really hard. Now their dashboard is entirely unavailable.

Photo

Text:

The Cloudflare Dashboard is temporarily unavailable.

Please reload this page to try again. If the issue persists, please visit the Cloudflare Status page for up-to-date information regarding any ongoing issues.

It looks like they didn't fully realize how severe their problem really was.

Added 05-Nov-2023: On 04-Nov-2023 Matthew Prince, CEO of Cloudflare, gave a detailed post mortem of the incident. The text Post Mortem on Cloudflare Control Plane and Analytics Outage is worth reading multiple times. It teaches a number of important lessons in resilience. Here I will quote some snippets, partly out of context, but highlighting some of the many problems.

The largest of the three facilities in Oregon is run by Flexential.

The mishap started as follows:

On November 2 at 08:50 UTC Portland General Electric (PGE), the utility company that services PDX-04, had an unplanned maintenance event affecting one of their independent power feeds into the building.

This happened without Cloudflare being aware of it. I had already speculated that Cloudflare was not really aware of the mess they were in:

Flexential did not inform Cloudflare that they had failed over to generator power. None of our observability tools were able to detect that the source of power had changed.

Things got worse quite quickly:

At approximately 11:40 UTC, there was a ground fault on a PGE transformer at PDX-04. ... Ground faults with high voltage (12,470 volt) power lines are very bad. Electrical systems are designed to quickly shut down to prevent damage when one occurs. Unfortunately, in this case, the protective measure also shut down all of PDX-04’s generators. This meant that the two sources of power generation for the facility — both the redundant utility lines as well as the 10 generators — were offline.

One mishap seldom comes alone:

PDX-04 also contains a bank of UPS batteries. ... the batteries started to fail after only 4 minutes.

As if written by a screenwriter:

the overnight shift (from Flexential) consisted of security and an unaccompanied technician who had only been on the job for a week.

The data center went dark without Cloudflare knowing it:

Between 11:44 and 12:01 UTC, with the generators not fully restarted, the UPS batteries ran out of power and all customers of the data center lost power. Throughout this, Flexential never informed Cloudflare that there was any issue at the facility.

The rest of the text discusses why Cloudflare put some of their production products into a single data center, in this case the faulty one.

We were also far too lax about requiring new products and their associated databases to integrate with the high availability cluster.

But the nightmare was not over yet:

At 12:48 UTC, Flexential was able to get the generators restarted. ... When Flexential attempted to power back up Cloudflare's circuits, the circuit breakers were discovered to be faulty.

The next sentence tells you why you should always have enough spare parts on hand:

Flexential began the process of replacing the failed breakers. That required them to source new breakers because more were bad than they had on hand in the facility.

Thundering herd problem, also see Josep Stuhli On Scaling to 20 Million Users:

When services were turned up there, we experienced a thundering herd problem where the API calls that had been failing overwhelmed our services.

Always test your disaster recovery procedures; if you don't, then:

A handful of products did not properly get stood up on our disaster recovery sites. These tended to be newer products where we had not fully implemented and tested a disaster recovery procedure.

Cloudflare's data center reported normal operations:

Flexential replaced our failed circuit breakers, restored both utility feeds, and confirmed clean power at 22:48 UTC.

But for Cloudflare the ordeal was not over yet. They had to restart everything:

Beginning first thing on November 3, our team began restoring service in PDX-04. That began with physically booting our network gear then powering up thousands of servers and restoring their services. The state of our services in the data center was unknown as we believed multiple power cycles were likely to have occurred during the incident. Our only safe process to recover was to follow a complete bootstrap of the entire facility.

This was no small feat:

Rebuilding these took 3 hours.

Matthew Prince acknowledges that much is to be learnt from this event:

But we also must expect that entire data centers may fail. Google has a process, where when there’s a significant event or crisis, they can call a Code Yellow or Code Red. In these cases, most or all engineering resources are shifted to addressing the issue at hand.

We have not had such a process in the past, but it’s clear today we need to implement a version of it ourselves

Little gold nugget from the lessons learnt:

Test the blast radius of system failures

Murphy's law applies universally, in particular for software and data centers.

https://eklausmeier.goip.de/blog/2023/10-30-david-heinemeier-hansson-on-cloud-computing https://eklausmeier.goip.de/blog/2023/10-30-david-heinemeier-hansson-on-cloud-computing David Heinemeier Hansson on Cloud Computing Mon, 30 Oct 2023 14:00:00 +0100 This is in continuation of Nima Badizadegan on Cloud Computing. David Heinemeier Hansson, creator of Ruby on Rails, cofounder of the HEY e-mail service, and prolific writer, made a number of notable remarks on cloud costs.

David Heinemeier Hansson from 37signals.com wrote We stand to save $7m over five years from our cloud exit. Also see Dropbox slips 500PB into its Magic Pocket, not spread over AWS: "Shifts 90% of your files from Amazon to in-house systems".

Heinemeier Hansson adds on 23-Jun-2023:

The back of the napkin math is that we'll save at least $1.5 million per year by owning our own hardware rather than renting it from Amazon. And crucially, we've been able to do this without changing the size of the operations team at all. Running our applications in the cloud just never provided the promised productivity gains to do with any smaller of a team anyway.

The main difference here is the lag time between needing new servers and seeing them online. It truly is incredible that you can spin up 100 powerful machines in the cloud in just a few minutes, but you also pay dearly for the privilege. And we just don't have such an unpredictable business as to warrant this premium. Given how much money we're saving owning our own hardware, we can afford to dramatically over-provision our server needs, and then when we need more, it still only takes a couple of weeks to show up.

Look at it this way. We spent about half a million dollars buying two pallets of servers from Dell, which added a combined 4,000 vCPUs with 7,680 GB of RAM and 384TB of NVMe storage to our server capacity. This hardware was more than adequate to run all the heritage services we brought home, together with HEY, and give our other Basecamp operations a hardware refresh. And it was less than a third the cost of what we predict we'll be saving EVERY YEAR! This is hardware we'll be amortizing over five years.

David Heinemeier Hansson shows that after just one year, operating costs had already gone down by $1m.

Photo

Our cloud spend (sans-S3) is down by 60% already. From around $180,000/month to less than $80,000. That's a cool million dollars in savings at the yearly run rate

In his post X celebrates 60% savings from cloud exit on 27-Oct-2023 he cites the Twitter/X engineering team:

Optimized our usage of cloud service providers and began doing much more on-prem. This shift has reduced our monthly cloud costs by 60%. Among the changes we made was a shift of all media/blob artifacts out of the cloud, which reduced our overall cloud data storage size by 60%, and separately, we succeeded in reducing cloud data processing costs by 75%.

Further:

According to earlier reports, X was spending $100 million per year with AWS, so if we take that as a base, they're on track to save $60m/year from the cloud exit achievements so far. Wild!

Added 04-Jan-2024: David Heinemeier Hansson added a FAQ here: The Big Cloud Exit FAQ.

Added 07-Jan-2024: David Heinemeier Hansson added the post Keeping the lights on while leaving the cloud. A single quote:

You don’t need the cloud to get good uptimes. You need mature technologies run on redundant hardware with good backups. Same as it ever was.

Simple, but true.

https://eklausmeier.goip.de/blog/2023/10-29-simplified-saaze-monitored-with-phpspy https://eklausmeier.goip.de/blog/2023/10-29-simplified-saaze-monitored-with-phpspy Simplified Saaze Monitored with PHPSPY Sun, 29 Oct 2023 21:35:00 +0100 This blog uses the PHP-based Simplified Saaze software. I measured Simplified Saaze using XHProf:

  1. Profiling PHP Programs
  2. Profiling PHP Programs #2

Still, I am interested in whether I missed anything.

In multiple talks, Rasmus Lerdorf, the creator of PHP, has advertised PHPSPY.

PHPSPY was written by Adam Saponara. The source code is in GitHub: https://github.com/adsr/phpspy.

I ran PHPSPY in top mode for some days using the dynamic mode of Simplified Saaze: phpspy -p 940 -p 17132 -p 61898 -p 61899 -t. The output is below. Some remarks on inclusive and exclusive times or counts:

  1. Inclusive counts everything for a function and all the functions it calls.
  2. Exclusive counts only the samples within the function itself. For example, if main() calls render(), samples taken inside render() add to main()'s inclusive count, but not to its exclusive count.
phpspy -p 940 -p 17132 -p 61898 -p 61899 -@
samp_count=666  err_count=10  func_count=67

tincl      texcl      incl       excl       excl%   func
313        151        0          0          0.00    ComposerAutoloaderInit50920a90746408ba7a500bacdb4908c1::getLoader /home/klm/php/sndsaaze/vendor/composer/autoload_real.php:19
132        103        0          0          0.00    composerRequire50920a90746408ba7a500bacdb4908c1 /home/klm/php/sndsaaze/vendor/composer/autoload_real.php:50
99         99         0          0          0.00    Composer\Autoload\includeFile /home/klm/php/sndsaaze/vendor/composer/ClassLoader.php:569
76         76         0          0          0.00    json_decode <internal>:-1
298        34         0          0          0.00    Saaze\Saaze::run /home/klm/php/sndsaaze/vendor/eklausme/saaze/Saaze.php:32
30         30         0          0          0.00    ComposerAutoloaderInit50920a90746408ba7a500bacdb4908c1::loadClassLoader /home/klm/php/sndsaaze/vendor/composer/autoload_real.php:9
23         23         0          0          0.00    FFI::cdef <internal>:-1
19         19         0          0          0.00    file_get_contents <internal>:-1
15         15         0          0          0.00    <main> /home/klm/php/sndsaaze/vendor/symfony/polyfill-mbstring/bootstrap.php:1
13         13         0          0          0.00    md4c_toHtml <internal>:-1
14         11         0          0          0.00    str_word_count <internal>:-1
10         10         0          0          0.00    yaml_parse <internal>:-1
322        9          0          0          0.00    <main> /home/klm/php/sndsaaze/vendor/autoload.php:1
90         9          0          0          0.00    Saaze\TemplateManager::<main> /home/klm/php/sndsaaze/templates/blog/entry.php:1
8          8          0          0          0.00    <main> /home/klm/php/sndsaaze/vendor/symfony/polyfill-intl-grapheme/bootstrap.php:1
5          5          0          0          0.00    FFI::string <internal>:-1
653        4          0          0          0.00    <main> /home/klm/php/sndsaaze/public/index.php:1
5          4          0          0          0.00    Saaze\TemplateManager::<main> /home/klm/php/sndsaaze/templates/error.php:1
5          3          0          0          0.00    microtime <internal>:-1
4          3          0          0          0.00    strpos <internal>:-1
3          3          0          0          0.00    <main> /home/klm/php/sndsaaze/vendor/symfony/polyfill-ctype/bootstrap.php:1
3          3          0          0          0.00    shell_exec <internal>:-1
27         2          0          0          0.00    Saaze\CollectionArray::loadCollections /home/klm/php/sndsaaze/vendor/eklausme/saaze/CollectionArray.php:27
21         2          0          0          0.00    <main> <internal>:-1
10         2          0          0          0.00    is_dir <internal>:-1
9          2          0          0          0.00    Saaze\Collection::__construct /home/klm/php/sndsaaze/vendor/eklausme/saaze/Collection.php:15
9          2          0          0          0.00    Saaze\TemplateManager::renderError /home/klm/php/sndsaaze/vendor/eklausme/saaze/TemplateManager.php:62
4          2          0          0          0.00    scandir <internal>:-1
3          2          0          0          0.00    strlen <internal>:-1
2          2          0          0          0.00    Saaze\TemplateManager::<main> /home/klm/php/sndsaaze/templates/top-layout.php:1
2          2          0          0          0.00    Saaze\MarkdownContentParser::inlineMath /home/klm/php/sndsaaze/vendor/eklausme/saaze/MarkdownContentParser.php:172
2          2          0          0          0.00    strip_tags <internal>:-1
23         1          0          0          0.00    Saaze\MarkdownContentParser::toHtml /home/klm/php/sndsaaze/vendor/eklausme/saaze/MarkdownContentParser.php:562
9          1          0          0          0.00    substr <internal>:-1
1          1          0          0          0.00    substr_replace <internal>:-1
1          1          0          0          0.00    usort <internal>:-1
1          1          0          0          0.00    printf <internal>:-1
1          1          0          0          0.00    <main> /home/klm/php/sndsaaze/vendor/symfony/polyfill-intl-normalizer/bootstrap.php:1
1          1          0          0          0.00    ob_end_clean <internal>:-1
1          1          0          0          0.00    str_replace <internal>:-1
1          1          0          0          0.00    file_put_contents <internal>:-1
1          1          0          0          0.00    max <internal>:-1
1          1          0          0          0.00    is_readable <internal>:-1
111        0          0          0          0.00    Saaze\Collection::loadMkdwnRecursive /home/klm/php/sndsaaze/vendor/eklausme/saaze/Collection.php:70
91         0          0          0          0.00    Saaze\TemplateManager::renderEntry /home/klm/php/sndsaaze/vendor/eklausme/saaze/TemplateManager.php:37

Interestingly, the time spent in the Composer classes is greater than the actual runtime of Simplified Saaze!

Added 11-Dec-2023: Measured once again. Results are below.

phpspy -p 879 -p 1015 -p 1016 -p 20333 -@
samp_count=2422  err_count=55  func_count=97

tincl      texcl      incl       excl       excl%   func
1077       491        0          0          0.00    ComposerAutoloaderInit50920a90746408ba7a500bacdb4908c1::getLoader /home/klm/php/sndsaaze/vendor/composer/autoload_real.php:19
506        369        0          0          0.00    composerRequire50920a90746408ba7a500bacdb4908c1 /home/klm/php/sndsaaze/vendor/composer/autoload_real.php:50
353        353        0          0          0.00    json_decode <internal>:-1
335        335        0          0          0.00    Composer\Autoload\includeFile /home/klm/php/sndsaaze/vendor/composer/ClassLoader.php:569
81         81         0          0          0.00    md4c_toHtml <internal>:-1
76         76         0          0          0.00    ComposerAutoloaderInit50920a90746408ba7a500bacdb4908c1::loadClassLoader /home/klm/php/sndsaaze/vendor/composer/autoload_real.php:9
75         75         0          0          0.00    <main> /home/klm/php/sndsaaze/vendor/symfony/polyfill-mbstring/bootstrap.php:1
459        69         0          0          0.00    Saaze\TemplateManager::<main> /home/klm/php/sndsaaze/templates/blog/entry.php:1
67         67         0          0          0.00    FFI::cdef <internal>:-1
58         58         0          0          0.00    file_get_contents <internal>:-1
48         35         0          0          0.00    str_word_count <internal>:-1
1162       29         0          0          0.00    Saaze\Saaze::run /home/klm/php/sndsaaze/vendor/eklausme/saaze/Saaze.php:32
28         28         0          0          0.00    <main> /home/klm/php/sndsaaze/vendor/symfony/polyfill-intl-grapheme/bootstrap.php:1
26         26         0          0          0.00    yaml_parse <internal>:-1
1096       23         0          0          0.00    <main> /home/klm/php/sndsaaze/vendor/autoload.php:1
22         20         0          0          0.00    scandir <internal>:-1
21         20         0          0          0.00    strpos <internal>:-1
491        19         0          0          0.00    Saaze\TemplateManager::renderEntry /home/klm/php/sndsaaze/vendor/eklausme/saaze/TemplateManager.php:37
21         16         0          0          0.00    microtime <internal>:-1
16         16         0          0          0.00    <main> /home/klm/php/sndsaaze/vendor/symfony/polyfill-ctype/bootstrap.php:1
32         15         0          0          0.00    substr <internal>:-1
74         14         0          0          0.00    <main> <internal>:-1
2384       12         0          0          0.00    <main> /home/klm/php/sndsaaze/public/index.php:1
15         12         0          0          0.00    Saaze\MarkdownContentParser::inlineMath /home/klm/php/sndsaaze/vendor/eklausme/saaze/MarkdownContentParser.php:172
12         12         0          0          0.00    strip_tags <internal>:-1
20         11         0          0          0.00    is_dir <internal>:-1
12         10         0          0          0.00    Saaze\TemplateManager::<main> /home/klm/php/sndsaaze/templates/top-layout.php:1
7          7          0          0          0.00    str_replace <internal>:-1
260        6          0          0          0.00    Saaze\Collection::loadEntry /home/klm/php/sndsaaze/vendor/eklausme/saaze/Collection.php:82
159        6          0          0          0.00    Saaze\MarkdownContentParser::toHtml /home/klm/php/sndsaaze/vendor/eklausme/saaze/MarkdownContentParser.php:562
8          6          0          0          0.00    FFI::string <internal>:-1
6          6          0          0          0.00    shell_exec <internal>:-1
233        5          0          0          0.00    Saaze\Entry::getContentAndExcerpt /home/klm/php/sndsaaze/vendor/eklausme/saaze/Entry.php:86
6          5          0          0          0.00    <main> /home/klm/php/sndsaaze/vendor/symfony/polyfill-intl-normalizer/bootstrap.php:1
5          5          0          0          0.00    function_exists <internal>:-1
71         4          0          0          0.00    Saaze\CollectionArray::loadCollections /home/klm/php/sndsaaze/vendor/eklausme/saaze/CollectionArray.php:27
50         4          0          0          0.00    Saaze\Entry::parseEntry /home/klm/php/sndsaaze/vendor/eklausme/saaze/Entry.php:21
6          4          0          0          0.00    Saaze\TemplateManager::<main> /home/klm/php/sndsaaze/templates/error.php:1
6          4          0          0          0.00    is_readable <internal>:-1
21         3          0          0          0.00    Saaze\Collection::__construct /home/klm/php/sndsaaze/vendor/eklausme/saaze/Collection.php:15
5          3          0          0          0.00    Saaze\TemplateManager::<main> <internal>:-1
4          3          0          0          0.00    Saaze\MarkdownContentParser::myTag /home/klm/php/sndsaaze/vendor/eklausme/saaze/MarkdownContentParser.php:212
3          3          0          0          0.00    substr_replace <internal>:-1
3          3          0          0          0.00    max <internal>:-1
3          3          0          0          0.00    Saaze\MarkdownContentParser::displayMath /home/klm/php/sndsaaze/vendor/eklausme/saaze/MarkdownContentParser.php:145
3          3          0          0          0.00    dirname <internal>:-1
3          3          0          0          0.00    getenv <internal>:-1
3          3          0          0          0.00    rtrim <internal>:-1
549        2          0          0          0.00    Saaze\Collection::loadMkdwnRecursive /home/klm/php/sndsaaze/vendor/eklausme/saaze/Collection.php:70
72         2          0          0          0.00    Saaze\Config::init /home/klm/php/sndsaaze/vendor/eklausme/saaze/Config.php:14
20         2          0          0          0.00    Saaze\Collection::parseCollection /home/klm/php/sndsaaze/vendor/eklausme/saaze/Collection.php:22
16         2          0          0          0.00    Saaze\MarkdownContentParser::gallery /home/klm/php/sndsaaze/vendor/eklausme/saaze/MarkdownContentParser.php:397
5          2          0          0          0.00    date <internal>:-1
3          2          0          0          0.00    strlen <internal>:-1
2          2          0          0          0.00    ltrim <internal>:-1
2          2          0          0          0.00    Saaze\Collection::Saaze\{closure} /home/klm/php/sndsaaze/vendor/eklausme/saaze/Collection.php:54
2          2          0          0          0.00    Saaze\TemplateManager::templateExists /home/klm/php/sndsaaze/vendor/eklausme/saaze/TemplateManager.php:7
2          2          0          0          0.00    Saaze\MarkdownContentParser::twitter /home/klm/php/sndsaaze/vendor/eklausme/saaze/MarkdownContentParser.php:333
2          2          0          0          0.00    fopen <internal>:-1
2          2          0          0          0.00    Composer\Autoload\ClassLoader::register /home/klm/php/sndsaaze/vendor/composer/ClassLoader.php:389
2          2          0          0          0.00    strtotime <internal>:-1
192        1          0          0          0.00    Saaze\Entry::__construct /home/klm/php/sndsaaze/vendor/eklausme/saaze/Entry.php:13
10         1          0          0          0.00    Saaze\Entry::getUrl /home/klm/php/sndsaaze/vendor/eklausme/saaze/Entry.php:74
4          1          0          0          0.00    ob_end_clean <internal>:-1

The results are similar, accentuating the importance of Composer even more clearly.

Added 08-Feb-2024: I had some trouble getting PHPSPY to work again, see phpspy no longer works #136. I ran PHPSPY again, but this time I dropped Composer, which the above PHPSPY sessions had marked as dominant. The results for three days are below.

phpspy -H 9999 --pgrep=php-fpm -@
samp_count=2651  err_count=575856  func_count=58

tincl      texcl      incl       excl       excl%   func
1047       850        0          0          0.00    FFI::string <internal>:-1
230        207        0          0          0.00    substr <internal>:-1
209        206        0          0          0.00    str_replace <internal>:-1
212        194        0          0          0.00    <main> <internal>:-1
269        193        0          0          0.00    max <internal>:-1
200        185        0          0          0.00    strpos <internal>:-1
147        147        0          0          0.00    ctype_space <internal>:-1
102        99         0          0          0.00    printf <internal>:-1
82         82         0          0          0.00    rtrim <internal>:-1
68         68         0          0          0.00    json_decode <internal>:-1
123        43         0          0          0.00    date <internal>:-1
42         41         0          0          0.00    strlen <internal>:-1
30         30         0          0          0.00    strip_tags <internal>:-1
28         28         0          0          0.00    md4c_toHtml <internal>:-1
47         23         0          0          0.00    str_word_count <internal>:-1
21         21         0          0          0.00    preg_match <internal>:-1
18         18         0          0          0.00    str_contains <internal>:-1
17         16         0          0          0.00    implode <internal>:-1
20         15         0          0          0.00    Saaze\TemplateManager::<main> <internal>:-1
15         15         0          0          0.00    strrpos <internal>:-1
14         14         0          0          0.00    is_dir <internal>:-1
13         13         0          0          0.00    define <internal>:-1
58         12         0          0          0.00    microtime <internal>:-1
12         12         0          0          0.00    getenv <internal>:-1
11         11         0          0          0.00    sprintf <internal>:-1
11         11         0          0          0.00    substr_replace <internal>:-1
11         10         0          0          0.00    FFI::cdef <internal>:-1
9          9          0          0          0.00    str_split <internal>:-1
8          8          0          0          0.00    function_exists <internal>:-1
14         7          0          0          0.00    urlencode <internal>:-1
7          7          0          0          0.00    DateTime::__construct <internal>:-1
6          6          0          0          0.00    yaml_parse <internal>:-1
13         5          0          0          0.00    is_array <internal>:-1
5          5          0          0          0.00    ltrim <internal>:-1
4          3          0          0          0.00    explode <internal>:-1
3          3          0          0          0.00    is_readable <internal>:-1
3          3          0          0          0.00    preg_replace <internal>:-1
3          3          0          0          0.00    extension_loaded <internal>:-1
3          3          0          0          0.00    str_starts_with <internal>:-1
3          3          0          0          0.00    spl_autoload_register <internal>:-1
3          3          0          0          0.00    trim <internal>:-1
3          2          0          0          0.00    count <internal>:-1
2          2          0          0          0.00    is_string <internal>:-1
2          2          0          0          0.00    is_bool <internal>:-1
1          1          0          0          0.00    round <internal>:-1
1          1          0          0          0.00    error_reporting <internal>:-1
1          1          0          0          0.00    print_r <internal>:-1
1          1          0          0          0.00    header <internal>:-1
1          1          0          0          0.00    mb_strtolower <internal>:-1
1          1          0          0          0.00    array_key_exists <internal>:-1
1          1          0          0          0.00    spl_autoload_unregister <internal>:-1
1          1          0          0          0.00    gettype <internal>:-1
1          1          0          0          0.00    mb_substr <internal>:-1
1          1          0          0          0.00    ucwords <internal>:-1
1          1          0          0          0.00    openssl_cipher_iv_length <internal>:-1
1          1          0          0          0.00    file_exists <internal>:-1
1          1          0          0          0.00    defined <internal>:-1
1          0          0          0          0.00    is_object <internal>:-1

At a later time:

phpspy -H 9999 --pgrep=php-fpm -@
samp_count=5475  err_count=1004659  func_count=64

tincl      texcl      incl       excl       excl%   func
2301       1842       0          0          0.00    FFI::string <internal>:-1
462        458        0          0          0.00    str_replace <internal>:-1
491        447        0          0          0.00    substr <internal>:-1
467        431        0          0          0.00    strpos <internal>:-1
556        414        0          0          0.00    max <internal>:-1
353        310        0          0          0.00    <main> <internal>:-1
299        299        0          0          0.00    ctype_space <internal>:-1
207        203        0          0          0.00    printf <internal>:-1
145        142        0          0          0.00    rtrim <internal>:-1
122        122        0          0          0.00    json_decode <internal>:-1
88         85         0          0          0.00    strlen <internal>:-1
196        57         0          0          0.00    date <internal>:-1
56         56         0          0          0.00    md4c_toHtml <internal>:-1
55         55         0          0          0.00    strip_tags <internal>:-1
88         47         0          0          0.00    str_word_count <internal>:-1
46         46         0          0          0.00    preg_match <internal>:-1
44         44         0          0          0.00    str_contains <internal>:-1
118        39         0          0          0.00    microtime <internal>:-1
30         30         0          0          0.00    strrpos <internal>:-1
29         29         0          0          0.00    is_dir <internal>:-1
25         25         0          0          0.00    substr_replace <internal>:-1
27         24         0          0          0.00    implode <internal>:-1
29         20         0          0          0.00    Saaze\TemplateManager::<main> <internal>:-1
19         19         0          0          0.00    sprintf <internal>:-1
19         19         0          0          0.00    getenv <internal>:-1
27         16         0          0          0.00    urlencode <internal>:-1
16         16         0          0          0.00    str_split <internal>:-1
18         14         0          0          0.00    FFI::cdef <internal>:-1
14         14         0          0          0.00    define <internal>:-1
31         13         0          0          0.00    is_array <internal>:-1
13         13         0          0          0.00    ltrim <internal>:-1

Eliminating Composer indeed cut away all Composer-related CPU usage. It looks like this run cannot be fully compared to the two previous ones: this time FFI::string became dominant, which it was not before. No PHP source code from Simplified Saaze is visible except Saaze\TemplateManager::<main>. Instead, string-processing functions like substr(), str_replace(), and strpos() seem to dominate.

I am still hesitant about how much I can trust this run, as the err_count is alarmingly high.

https://eklausmeier.goip.de/blog/2023/10-23-pagefind-searching-in-static-sites https://eklausmeier.goip.de/blog/2023/10-23-pagefind-searching-in-static-sites Pagefind: Searching in Static Sites Mon, 23 Oct 2023 19:45:00 +0200 Pagefind is a JavaScript library which you add to your static site to get complete search functionality. Pagefind has the following advantages over other JavaScript libraries:

  1. Easy to install, no JavaScript dependency hell.
  2. Easy to add: the CSS and the two lines with <script> tags.
  3. Creating the index is easy and reasonably quick.

Pagefind was mainly written by Liam Bigelow from New Zealand and is promoted by CloudCannon. It is open source. It is written in Rust and JavaScript.

Language   | kLOC | #files
-----------|------|-------
Rust       | 36   | 63
JavaScript | 2    | 20

1. One-time installation. Installing Pagefind is just downloading a single binary from GitHub: select the proper binary for Apple, Linux, or Windows. In my case I used pagefind-v1.0.3-x86_64-unknown-linux-musl.tar.gz for Arch Linux. Unpack with

tar zxf pagefind-v1.0.3-x86_64-unknown-linux-musl.tar.gz

Unpacking the 10 MB archive will create a 22 MB executable, which is statically linked and therefore has no dependencies. That's it.
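A quick check that the binary runs; the -V/--version flag is listed in the help output shown further below:

$ ./pagefind --version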

2. Add CSS and JavaScript to template. Add the CSS and JavaScript references below to your template file, outside of <body>:

<link href="/pagefind/pagefind-ui.css" rel="stylesheet">
<script src="/pagefind/pagefind-ui.js"></script>
<script>
    window.addEventListener('DOMContentLoaded', (event) => {
        new PagefindUI({ element: "#search", showSubResults: true });
    });
</script>

Then add the actual search dialog in your template inside <body>, in my case to top-layout.php:

<div id="search"></div>
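Putting both snippets together, a stripped-down template skeleton could look like the following sketch; a real template of course contains the rest of the site's markup:

<!DOCTYPE html>
<html lang="en">
<head>
    <link href="/pagefind/pagefind-ui.css" rel="stylesheet">
    <script src="/pagefind/pagefind-ui.js"></script>
    <script>
        window.addEventListener('DOMContentLoaded', (event) => {
            new PagefindUI({ element: "#search", showSubResults: true });
        });
    </script>
</head>
<body>
    <div id="search"></div>
    <!-- rest of the page -->
</body>
</html>

The #search element is where PagefindUI mounts its input field and result list.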

3. Creating index files. This step must be repeated whenever you have new content or rename files. It does not need to be repeated whenever you regenerate your static HTML files, although if you want to play it safe, you can do just that. Index creation uses the above-mentioned executable pagefind. Running this command shows all the options:

$ pagefind -h
Implement search on any static website.

Usage: pagefind [OPTIONS]

Options:
  -s, --site <SITE>
          The location of your built static website
      --output-subdir <OUTPUT_SUBDIR>
          Where to output the search bundle, relative to the processed site
      --output-path <OUTPUT_PATH>
          Where to output the search bundle, relative to the working directory of the command
      --root-selector <ROOT_SELECTOR>
          The element Pagefind should treat as the root of the document. Usually you will want to use the data-pagefind-body attribute instead.
      --exclude-selectors <EXCLUDE_SELECTORS>
          Custom selectors that Pagefind should ignore when indexing. Usually you will want to use the data-pagefind-ignore attribute instead.
      --glob <GLOB>
          The file glob Pagefind uses to find HTML files. Defaults to "**/*.{html}"
      --force-language <FORCE_LANGUAGE>
          Ignore any detected languages and index the whole site as a single language. Expects an ISO 639-1 code.
      --serve
          Serve the source directory after creating the search index
  -v, --verbose
          Print verbose logging while indexing the site. Does not impact the web-facing search.
  -l, --logfile <LOGFILE>
          Path to a logfile to write to. Will replace the file on each run
  -k, --keep-index-url
          Keep "index.html" at the end of search result paths. Defaults to false, stripping "index.html".
  -h, --help
          Print help
  -V, --version
          Print version

This blog uses Simplified Saaze. In the case of Simplified Saaze I generate static files like this:

php saaze -mortb /tmp/build

This builds all static files in /tmp/build, which happens to be in a RAM disk on Arch Linux. Then change to this directory and issue

$ time pagefind -s . --exclude-selectors aside --exclude-selectors footer --force-language=en

Running Pagefind v1.0.3
Running from: "/tmp/build"
Source:       ""
Output:       "pagefind"

[Walking source directory]
Found 555 files matching **/*.{html}

[Parsing files]
Did not find a data-pagefind-body element on the site.
↳ Indexing all <body> elements on the site.

[Reading languages]
Discovered 1 language: en

[Building search indexes]
Total:
  Indexed 1 language
  Indexed 555 pages
  Indexed 33129 words
  Indexed 0 filters
  Indexed 0 sorts

Finished in 1.618 seconds
        real 1.65s
        user 1.49s
        sys 0
        swapped 0
        total space 0

The command

pagefind -s . --force-language=en

would have been enough in many cases. In my special case I want to exclude content which resides between <aside> and </aside>, and similarly between <footer> and </footer>.

The option --force-language=en is required in my case, as I have English and German posts. Without this option pagefind would create two distinct indexes: you could then only search in one language, but not in the other. By forcing the language, pagefind puts everything into a single index. See Multilingual search.

Indexing creates a directory called pagefind. Just copy this directory to your web-server during deployment. This directory looks something like this:

pagefind
├── fragment
│   ├── en_0933ef4.pf_fragment
│   ├── en_100be25.pf_fragment
│   ├── en_10b07a1.pf_fragment
│   ├── . . .
│   └── en_fef8cdb.pf_fragment
├── index
│   ├── en_22c87b9.pf_index
│   ├── en_26afa46.pf_index
│   ├── en_2a80efb.pf_index
│   ├── . . .
│   └── en_fde0a3b.pf_index
├── pagefind.en_d6828bd6ef.pf_meta
├── pagefind-entry.json
├── pagefind.js
├── pagefind-modular-ui.css
├── pagefind-modular-ui.js
├── pagefind-ui.css
├── pagefind-ui.js
├── wasm.en.pagefind
└── wasm.unknown.pagefind

3 directories, 596 files

The files in index are usually around 40 KB each; those in fragment are usually around 1-10 KB each. The JavaScript totals 100 KB, the CSS is less than 20 KB.

4. Network traffic. Pagefind was specifically designed to load only small amounts of data over the network. This can be seen in the diagram below.

This makes Pagefind particularly attractive performance-wise.

5. Using Pagefind as a user. Using Pagefind as a user is intuitive and needs no further explanation. This blog now has Pagefind integrated into every page. Just type a word you want to search for, and results pop up almost instantly. This instant reaction is no surprise, as the actual searching is done in the browser.

There is one slight limitation of Pagefind: currently you cannot search for word groups, i.e., phrases. Consider Shakespeare's Hamlet:

To be, or not to be, that is the question

Searching for to or be would likely give you lots of results, but probably not the ones you are looking for. This is clearly not a problem for this blog, as I do not have lyrics here.

https://eklausmeier.goip.de/blog/2023/10-02-converting-journal-article-from-latex-to-markdown https://eklausmeier.goip.de/blog/2023/10-02-converting-journal-article-from-latex-to-markdown Converting Journal Article from LaTeX to Markdown Mon, 02 Oct 2023 16:00:00 +0200 1. Problem statement. You have a scientific journal article in LaTeX format on arXiv but want it in Markdown format for a personal blog. In our case we take the article "A Parsec-Scale Galactic 3D Dust Map out to 1.25 kpc from the Sun" from Gordian Edenhofer et al. The original paper is here: https://arxiv.org/abs/2308.01295

If the article is in Markdown format, it can then be easily transformed into HTML. Having an article in Markdown format has a number of advantages over having the article in LaTeX format:

  1. It is much easier to write Markdown than LaTeX
  2. Reading HTML is easier than reading a PDF
  3. The notion of a page, i.e., a paper-sized page, does not have a good meaning in the world of smartphones, tablets, etc.

Of course, the math in the LaTeX document will be converted to MathJax.
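For example, a display equation like the following (a generic equation, not taken from the paper)

\begin{equation}
    E = m c^2
\end{equation}

is wrapped in double dollar signs by the conversion; MathJax accepts the equation environment inside the $$ delimiters:

$$
\begin{equation}
    E = m c^2
\end{equation}
$$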

2. Overview of the content of the scientific article. The article briefly describes the importance of dust:

Interstellar dust comprises only 1% of the interstellar medium by mass, but absorbs and re-radiates more than 30% of starlight at infrared wavelengths. As such, dust plays an outsized role in the evolution of galaxies, catalyzing the formation of molecular hydrogen, shielding complex molecules from the UV radiation field, coupling the magnetic field to interstellar gas, and regulating the overall heating and cooling of the interstellar medium.

Dust's ability to scatter and absorb starlight is precisely the reason why we can probe it in three spatial dimensions.

A novel $\cal O(n)$ method called Iterative Charted Refinement (ICR) was used to analyze the more than 122 billion data points from the Gaia mission.

Photo

The algorithm ran for 4 weeks using the SLURM workload manager.

We employ a new Python framework called NIFTy.re for deploying NIFTy models to GPUs. NIFTy.re is part of the NIFTy Python package and internally uses JAX to run models on the GPU. We are able to speed up the evaluation of the value and gradient of ... by two orders of magnitude by transitioning from CPUs to GPUs. Our reconstruction ran on a single NVIDIA A100 GPU with 80 GB of memory for about four weeks.

Needless to say, this 4-week run was only one of the very many runs needed to actually produce the final result.

The result is a 3D dust map

achieving an angular resolution of ${14'}$ ($N_\text{side}=256$). We sample the dust extinction in 516 distance bins spanning 69 pc to 1250 pc. We obtain a maximum distance resolution of 0.4pc at 69pc and a minimum distance resolution of 7pc at 1.25 kpc.

3. Solution. Initially a Pandoc approach was tried. Pandoc and all its dependencies on Arch Linux need more than half a gigabyte of space, just for the installation. Even after installation, the Pandoc approach failed.

Perl, the workhorse, had to do the job again. For the conversion I created two Perl scripts:

  1. blogparsec: converts main.tex, i.e., the actual paper
  2. blogbibtex: converts the BibTeX-formatted file literature.bib

Using those two scripts, creating the Markdown file goes like this:

blogparsec main.tex > 08-03-a-parsec-scale-galactic-3d-dust-map-out-to-1-25-kpc-from-the-sun.md
blogbibtex literature.bib >> 08-03-a-parsec-scale-galactic-3d-dust-map-out-to-1-25-kpc-from-the-sun.md

This file still needs some manual editing. One prominent case is moving the table of contents to the top, as it is appended at the end.

4. blogparsec script. Some notes on this Perl script. The input to this script is the actual LaTeX text with all the formulas etc.

First define some variables and use strict mode.

#!/bin/perl -W
# Convert paper in "Astronomy & Astrophysics" LaTeX format to something resembling Markdown
# Manual post-processing is still necessary but a lot easier

use strict;
my ($ignore,$sectionCnt,$subSectionCnt,$replaceAlgo,$replaceTable) = (1,0,0,0,0);
my (@sections) = ();

The frontmatter header is a simple here-document:

print <<'EOF';
---
date: "2023-08-03 14:00:00"
title: "A Parsec-Scale Galactic 3D Dust Map out to 1.25 kpc from the Sun"
description: "A 3D map of the spatial distribution of interstellar dust extinction out to a distance of 1.25 kpc from the Sun"
MathJax: true
categories: ["mathematics", "astronomy"]
tags: ["interstellar dust", "interstellar medium", "Milky Way", "Gaia", "Gaussian processes", "Bayesian inference"]
---

EOF

The main loop looks at each line in main.tex. After the loop the literature section is added, then all sections collected so far are printed.

while (<>) {
    $ignore = 0 if (/\\author\{Gordian~Edenhofer/);
    next if ($ignore);

    (...)

    print;

    print "\$\$\n" if (/(\\end\{equation\}|\\end\{align\})/);	# enclose with $$ #2
}


print "## Literature<a id=Literature></a>\n";
for (@sections) {
    print $_ . "\n";
}
++$sectionCnt;
print "- [$sectionCnt. Literature](#Literature)\n";

What follows is the part marked as (...) in the above code.

Here is the special-case handling for algorithms and tables in the paper: the algorithms are simply replaced by screenshots of the original PDF, the table by a here-document:


    # In this particular case we replace the two algorithms with a corresponding screenshot
    if (/^\\begin\{algorithm/) {
        $replaceAlgo = 1;
        next;
    } elsif (/^\s+Pseudocode for ICR creating a GP/) {
        s/^(\s+)//;
        s/(\\left|right)\\/$1\\\\/g;	# probably MathJax bug
        $replaceAlgo = 0;
        print "![](*<?=\$rbase?>*/img/parsec_res/Algorithm1.webp)\n\n";
    } elsif (/^\s+Pseudocode for our expansion point variational/) {
        s/^(\s+)//;
        $replaceAlgo = 0;
        print "![](*<?=\$rbase?>*/img/parsec_res/Algorithm2.webp)\n\n";
    } elsif ($replaceAlgo == 1) { next; }

    if (/^\\begin\{table/) {
        $replaceTable = 1;
        next;
    } elsif (/^\\end\{table/) {
        $replaceTable = 0;
        print <<'EOF';

Parameters of the prior distributions.
The parameters $s$, $\mathrm{scl}$, and $\mathrm{off}$ fully determine $\rho$.
They are jointly chosen to a priori yield the kernel reconstructed in [Leike2020][].



 Name | Distribution | Mean | Standard Deviation | Degrees of Freedom
 -----|--------------|------|--------------------|--------------------
_s_   | Normal       | 0.0  | Kernel from [Leike2020][] | 786,432 &times; 772
scl   | Log-Normal   | 1.0  | 0.5                |  1
off   |  Normal      | $-6.91\left(\approx\ln10^{-3}\right)$ <br>prior median extinction <br>from [Leike2020][] | 1.0 | 1
      |              |      | Shape Parameter    | Scale Parameter  
$n_\sigma$ | Inverse Gamma | 3.0 | 4.0 | #Stars = 53,880,655

EOF
        next;
    } elsif ($replaceTable == 1) { next; }

The header with its authors and institutions needs some extra handling:

s/^\\(author|institute)\{/\n<p>\u$1s:<\/p>\n\n1. /;

s/\~/ /g;

# Authors, institutions, abstract, etc.
s/\(\\begin\{CJK\*.+?CJK\*\}\)//;
s/\\inst\{(.+?)\}/ \($1\)/g;
if (/^\s+\\and/) { print "1. "; next; }
s/^\{% (\w+) heading \(.+$/\n\n_\u$1._ /;
s/^\\abstract/## Abstract/;
s/^\\keywords\{/__Key words.__ /;

Many lines are simply no longer needed in Markdown and are therefore dropped:

# Lines to drop, not relevant
next if (/(^\\maketitle|^%\s+|^%In general|^\\date|^\\begin\{figure|^\\end\{figure|\s+\\centering|\s+\\begin\{split\}|\s+\\end\{split\}|^\s*\\label|^\\end\{acknowledgements\}|^\\FloatBarrier|^\\bibliograph|^\\end\{algorithm\}|^\\begin\{appendix|^\\end\{appendix\}|^\\end\{document\})/);

s/\s+%\s+[^%].+$//;	# Drop LaTeX comments
s/\\fnmsep.+$//;	# drop e-mail

Display math is enclosed in double dollars:

print "\$\$\n" if (/(\\begin\{equation\}|\\begin\{align\})/);	# enclose with $$a #1

Images are replaced with the usual Markdown code ![]():

# images
s/\s+\\includegraphics.+res\/(\w+)\}/!\[Photo\]\(\*<\?=\$rbase\?>\*\/img\/parsec_res\/$1\.png)/;
s/\s+\\subcaptionbox\{(.+?)\}\{\%/\n__$1__\n/g;

Some LaTeX macros are not present in MathJax and therefore need to be replaced.

# MathJax doesn't know \nicefrac
s/\\nicefrac\{(.+?)\}\{(.+?)\}/\{$1\}\/\{$2\}/g;
s/\\coloneqq/:=/g;	# MathJax doesn't know \coloneqq + \argmin + \SI
s/\\argmin/\\mathop\{\\hbox\{arg min\}\}/g;
s/\\SI(|\[parse\-numbers=false\])\{(.+?)\}/$2/g;
s/\\SIrange\{(.+?)\}\{(.+?)\}\{(|\\)([^\\]+?)\}/$1 $4 to $2 $4/g;
s/\\nano\\meter/nm/g;
s/\{\\pc\}/pc/g;
s/\{\\kpc\}/kpc/g;
s/(kpc|pc)\$/\\\\,\\hbox\{$1\}\$/g;
s/\{\\cubic\\pc\}/\\\\,\\hbox\{pc\}^3/g;

What looks good in LaTeX does not necessarily look good in Markdown:

s/i\.e\.\\ /i.e., /g;

# Special cases
s/``([A-Za-z])/"$1/g;	# double backquotes in LaTeX have an entirely different meaning than in Markdown

More MathJax specialities:

# These are probably MathJax bugs, which we correct here
s/\$\\tilde\{Q\}_\{\\bar\{\\xi\}\}\$/\$\\tilde\{Q\}\\_\{\\bar\{\\xi\}\}\$/g;
s/\$\\mathcal\{D\}_/\$\\mathcal\{D\}\\_/g;
s/\$P\(d\|\\mathcal\{D\}_/\$P\(d\|\\mathcal\{D\}\\_/g;
s/\$\\mathrm\{sf\}_/\$\\mathrm\{sf\}\\_/g;

Various LaTeX text-macros:

s/\\url\{(.+?)\}/$1/g;	# Markdown automatically URL-ifies URLs, so we can dispense \url{}

# Thousands separator, see https://stackoverflow.com/questions/33442240/perl-printf-to-use-commas-as-thousands-separator
s/\\num\[group-separator=\{,\}\]\{(\d+)\}/scalar reverse(join(",",unpack("(A3)*", reverse int($1))))/eg;

# Code
s/\\lstinline\|(.+?)\|/`$1`/g;
s/\\texttt\{(.+?)\}/`$1`/g;
s/quality\\_flags\$<\$8/quality_flags<8/g;	# special case

# Special cases for preventing code blocks because of indentation
s/   (The angular resolution)/$1/;
s/   (The stated highest r)/$1/;
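To see the thousands-separator substitution in action, here is a short test, using the star count from the prior table as input:

$_ = '\num[group-separator={,}]{53880655}';
s/\\num\[group-separator=\{,\}\]\{(\d+)\}/scalar reverse(join(",",unpack("(A3)*", reverse int($1))))/eg;
print;	# prints: 53,880,655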

Section and subsection headers become ## and ### in Markdown:

# sections + subsections
if (/\\section\{(.+?)\}\s*$/) {
    my $s = $1;
    ++$sectionCnt; $subSectionCnt = 0;
    push @sections, "- [$sectionCnt. $s](#s$sectionCnt)";
    $_ = "\n## $sectionCnt. $s<a id=s$sectionCnt></a>\n";
} elsif (/\\subsection\{(.+?)\}\s*$/) {
    my $s = $1;
    ++$subSectionCnt;
    push @sections, "\t- [$sectionCnt.$subSectionCnt $s](#s${sectionCnt}_$subSectionCnt)";
    $_ = "\n### $sectionCnt.$subSectionCnt $s<a id=s${sectionCnt}_$subSectionCnt></a>\n";
}

For footnotes I used block quotes in Markdown.

if (/(\\footnotetext\{%|^\\begin\{acknowledgements\})/) { print "> "; next; }

I fought a little bit with citations and initially had something like:

# Citations
#s/\\citep(|\[.*?\]\[\])\{(\w+)\}/'('.(length($1)>4?substr($1,1,-3).' ':'').'['.join('], [',split(',',$2)).'][])'/eg;
# First approach, now obsolete through eval()-approach
#s/\\citep\{(\w+)\}/([$1][])/g;
#s/\\citep\{(\w+),(\w+)\}/([$1][], [$2][])/g;
#s/\\citep\{(\w+),(\w+),(\w+)\}/([$1][], [$2][], [$3][])/g;
#s/\\citep\{(\w+),(\w+),(\w+),(\w+)\}/([$1][], [$2][], [$3][], [$4][])/g;
#s/\\citep\{(\w+),(\w+),(\w+),(\w+),(\w+)\}/([$1][], [$2][], [$3][], [$4][], [$5][])/g;
#s/\\citep\{(\w+),(\w+),(\w+),(\w+),(\w+),(\w+)\}/([$1][], [$2][], [$3][], [$4][], [$5][], [$6][])/g;
#s/\\citep\{(\w+),(\w+),(\w+),(\w+),(\w+),(\w+),(\w+)\}/([$1][], [$2][], [$3][], [$4][], [$5][], [$6][], [$7][])/g;
#s/\\citep\{(\w+),(\w+),(\w+),(\w+),(\w+),(\w+),(\w+),(\w+)\}/([$1][], [$2][], [$3][], [$4][], [$5][], [$6][], [$7][], [$8][])/g;
#s/\\citep\{(\w+),(\w+),(\w+),(\w+),(\w+),(\w+),(\w+),(\w+),(\w+)\}/([$1][], [$2][], [$3][], [$4][], [$5][], [$6][], [$7][], [$8][], [$9][])/g;
#s/\\citep\{(\w+),(\w+),(\w+),(\w+),(\w+),(\w+),(\w+),(\w+),(\w+),(\w+)\}/([$1][], [$2][], [$3][], [$4][], [$5][], [$6][], [$7][], [$8][], [$9][], [$10][])/g;
#s/\\citet\{(\w+)\}/[$1][]/g;

Luckily this can be handled by eval in the regex, i.e., watch out for the s///eg; the e modifier is important:

s!\\citep\{([,\w]+)\}!'(['.join('][], [',split(/,/,$1)).'][])'!eg;	# cite-paranthesis without any prefix text
s!\\citep\[(.+?)\]\[\]\{(\w+)\}!'('.$1.' ['.join('][], [',split(/,/,$2)).'][])'!eg;	# citep with prefix text
s!\\(citet|citeauthor)\{([,\w]+)\}!'['.join('][], [',split(/,/,$2)).'][]'!eg;	# we handle citet+citeauthor the same
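A quick sanity check of the first of these rules; the citation keys are chosen for illustration:

$_ = '\citep{Leike2020,Edenhofer2023}';
s!\\citep\{([,\w]+)\}!'(['.join('][], [',split(/,/,$1)).'][])'!eg;
print;	# prints: ([Leike2020][], [Edenhofer2023][])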

During development of this Perl script I used Beyond Compare quite intensively, to compare the original against the changed file.

5. blogbibtex script. The input to this script is the BibTeX file with all literature references. The BibTeX file looks something like this:

@book{Draine2011,
  author  = {{Draine}, Bruce T.},
  title   = {{Physics of the Interstellar and Intergalactic Medium}},
  year    = 2011,
  adsurl  = {https://ui.adsabs.harvard.edu/abs/2011piim.book.....D},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@article{Popescu2002,
  author        = {{Popescu}, Cristina C. and {Tuffs}, Richard J.},
  title         = {{The percentage of stellar light re-radiated by dust in late-type Virgo Cluster galaxies}},
  journal       = {\mnras},
  keywords      = {galaxies: clusters: individual: Virgo Cluster, galaxies: fundamental parameters, galaxies: photometry, galaxies: spiral, galaxies: statistics, infrared: galaxies, Astrophysics},
  year          = 2002,
  month         = sep,
  volume        = {335},
  number        = {2},
  pages         = {L41-L44},
  doi           = {10.1046/j.1365-8711.2002.05881.x},
  archiveprefix = {arXiv},
  eprint        = {astro-ph/0208285},
  primaryclass  = {astro-ph},
  adsurl        = {https://ui.adsabs.harvard.edu/abs/2002MNRAS.335L..41P},
  adsnote       = {Provided by the SAO/NASA Astrophysics Data System}
}

The Perl script has some journal names preloaded:

#!/bin/perl -W
# Convert BibTeX to Markdown. Produce the following:
#    1. List of URL targets
#    2. Sorted list of literature entries

use strict;
my ($inArticle,$entry,$entryOrig,$type) = (0,"","","");
my %H;	# hash of hashes (each element is yet another hash)
my %Journals = (	# see http://cdsads.u-strasbg.fr/abs_doc/aas_macros.html
    '\aap'   => 'Astronomy & Astrophysics',
    '\aj'    => 'Astronomical Journal',
    '\apj'   => 'The Astrophysical Journal',
    '\apjl'  => 'Astrophysical Journal, Letters',
    '\apjs'  => 'Astrophysical Journal, Supplement',
    '\mnras' => 'Monthly Notices of the RAS',
    '\nat'   => 'Nature'
);

The actual loop populates the hash %H:

while (<>) {
    if (/^@(article|book|inproceedings|misc|software)\{(\w+),$/) {
        ($type,$entry,$entryOrig,$inArticle) = ($1,uc $2,$2,1);
        $H{$entry}{'entry'} = $entryOrig;
        $H{$entry}{'type'} = $type;
        #printf("\t\tentry = |%s|, type = |%s|\n",$entry,$type);
    } elsif ($inArticle) {
        if (/^}\s*$/) { $inArticle = 0; next; }
        if (/^\s+(\w+)\s*=\s*(.+)(|,)$/) {
            my ($key,$value) = ($1,$2);

            # LaTeX foreign language character handling
            $value =~ s/\{\\ss\}/ß/g;
            $value =~ s/\{\\"A\}/Ä/g;
            $value =~ s/\{\\"U\}/Ü/g;
            $value =~ s/\{\\"O\}/Ö/g;
            $value =~ s/\{\\"a\}/ä/g;
            $value =~ s/\{\\"u\}/ü/g;
            $value =~ s/\{\\"i\}/ï/g;
            $value =~ s/\{\\H\{o\}\}/ő/g;
            $value =~ s/\{\\"\\i\}/ï/g;
            $value =~ s/\{\\"o\}/ö/g;
            $value =~ s/\{\\'A\}/Á/g;	# accent aigu
            $value =~ s/\{\\'E\}/É/g;	# accent aigu
            $value =~ s/\{\\'O\}/Ó/g;	# accent aigu
            $value =~ s/\{\\'U\}/Ú/g;	# accent aigu
            $value =~ s/\{\\'a\}/á/g;	# accent aigu
            $value =~ s/\{\\'e\}/é/g;	# accent aigu
            $value =~ s/\{\\'o\}/ó/g;	# accent aigu
            $value =~ s/\{\\'u\}/ú/g;	# accent aigu
            $value =~ s/\{\\`a\}/à/g;	# accent grave
            $value =~ s/\{\\`e\}/è/g;	# accent grave
            $value =~ s/\{\\`u\}/ù/g;	# accent grave
            $value =~ s/\{\\^a\}/â/g;	# accent circonflexe
            $value =~ s/\{\\^e\}/ê/g;	# accent circonflexe
            $value =~ s/\{\\^i\}/î/g;	# accent circonflexe
            $value =~ s/\{\\^\\i\}/î/g;	# accent circonflexe
            $value =~ s/\{\\^o\}/ô/g;	# accent circonflexe
            $value =~ s/\{\\^u\}/û/g;	# accent circonflexe
            $value =~ s/\{\\~A\}/Ã/g;	# minuscule a
            $value =~ s/\{\\~a\}/ã/g;	# minuscule a
            $value =~ s/\{\\~O\}/Õ/g;	# minuscule o
            $value =~ s/\{\\~o\}/õ/g;	# minuscule o
            $value =~ s/\{\\~n\}/ñ/g;	# palatal n
            $value =~ s/\{\\v\{C\}/Č/g;	# grapheme C
            $value =~ s/\{\\v\{c\}/č/g;	# grapheme c
            $value =~ s/\{\\v\{S\}/Š/g;	# grapheme S
            $value =~ s/\{\\v\{s\}/š/g;	# grapheme s
            $value =~ s/\{\\v\{Z\}/Ž/g;	# grapheme Z
            $value =~ s/\{\\v\{z\}/ž/g;	# grapheme z
    
            $value =~ s/\{|\}|\~//g;	# drop {}~
            $value =~ s/,$//;	# drop last comma
            $H{$entry}{$key} = $value;
            #printf("\t\t\tentry = |%s|, key = |%s|\n", $entry, $key);
        }
    }
}

Once everything is loaded into the hash, the hash is printed out in formatted form.

print("\n");
for my $e (sort keys %H) {
    printf("[%s]: %s\n", $H{$e}{'entry'},
        exists($H{$e}{'doi'}) ? 'https://doi.org/'.$H{$e}{'doi'}
        : exists($H{$e}{'url'}) ? $H{$e}{'url'} : '#Literature');
}
print("\n");

for my $e (sort keys %H) {
    my ($He,$date,$journal) = (\$H{$e},"","");
    if (exists($$He->{'year'}) && exists($$He->{'month'}) && exists($$He->{'day'})) {
        $date = sprintf("%02d-%s-%d", $$He->{'year'}, $$He->{'month'}, $$He->{'day'});
    } elsif (exists($$He->{'year'}) && exists($$He->{'month'})) {
        my $m = $$He->{'month'};
        $date = "\u$m" . "-" . 	$$He->{'year'};
    } elsif (exists($$He->{'year'})) {
        $date = $$He->{'year'};
    }
    if (exists($$He->{'journal'})) {
        my $t = $$He->{'journal'};
        $journal = ", " . ((substr($t,0,1) eq '\\') ? $Journals{$t} : $t);
        $journal .= ", Vol. " . $$He->{'volume'} if (exists($$He->{'volume'}));
        $journal .= ", Nr. " . $$He->{'number'} if (exists($$He->{'number'}));
        $journal .= ", pp. " . $$He->{'pages'} if (exists($$He->{'pages'}));
    }

    printf("1. \\[%s\\] %s: _%s_, %s%s%s\n", $H{$e}{'entry'}, $H{$e}{'author'},
        defined($H{$e}{'title'}) ? $H{$e}{'title'} : $H{$e}{'howpublished'},
        $date, $journal,
        exists($H{$e}{'doi'}) ? ', https://doi.org/'.$H{$e}{'doi'}
        : exists($H{$e}{'url'}) ? ', ' . $H{$e}{'url'} : ''
    );
}
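For the Popescu2002 entry from above, the two loops produce a link target and a literature item along these lines:

[Popescu2002]: https://doi.org/10.1046/j.1365-8711.2002.05881.x

1. \[Popescu2002\] Popescu, Cristina C. and Tuffs, Richard J.: _The percentage of stellar light re-radiated by dust in late-type Virgo Cluster galaxies_, Sep-2002, Monthly Notices of the RAS, Vol. 335, Nr. 2, pp. L41-L44, https://doi.org/10.1046/j.1365-8711.2002.05881.x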

The output of this blogbibtex script is then appended to the output of the previous script blogparsec.
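In shell terms, the combined conversion might look like this (a sketch; the file names are made up):

blogparsec thesis.tex     >  09-01-thesis.md   # LaTeX body to Markdown
blogbibtex literature.bib >> 09-01-thesis.md   # append link targets and literature list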

6. Open issues. I had already worked for two days on these two Perl scripts and wanted to finish. Therefore the following topics are not addressed, though they can be solved quite easily.

  1. There are still some stray curly braces, which should be removed.
  2. Back and forward references, i.e., all these still visible \Cref tags should be converted using link references in Markdown.
  3. LaTeX tables were converted manually; this should be fully automatic.
  4. Converting the \begin{algorithm} and \end{algorithm} probably is a lot trickier, as it needs extra CSS to work properly.
]]>
https://eklausmeier.goip.de/blog/2023/09-28-performance-comparison-of-ristorante-panorama-website-wordpress-vs-simplified-saaze https://eklausmeier.goip.de/blog/2023/09-28-performance-comparison-of-ristorante-panorama-website-wordpress-vs-simplified-saaze Performance Comparison of Ristorante Panorama Website: WordPress vs. Simplified Saaze Thu, 28 Sep 2023 07:00:00 +0200 In the previous post Example Theme for Simplified Saaze: Panorama I demonstrated the transition of a website from WordPress to Simplified Saaze. This very blog also uses Simplified Saaze. This post shows how much better, performance-wise, this transition was. The comparison is therefore between:

  1. Original: WordPress version
  2. Modified: Simplified Saaze version of Ristorante Panorama

The original website is hosted by Strato. It uses WordPress and Elementor.

1. Comparison. For the comparison I use the website tools.pingdom.com which provides various metrics to evaluate the performance of a website:

  1. Page size
  2. Number of requests
  3. Load time
  4. Concrete tips to improve performance
  5. Waterfall diagram of requests
  6. Breakdown of content types

All tests in Pingdom were conducted for Europe/Frankfurt.

The results are thus:

Original (WordPress) Modified (Simplified Saaze)

The results for the original website are indeed very bad on every dimension: page size, load time, number of requests. In comparison to the modified version using Simplified Saaze the ratio is roughly:

  1. Page size is more than 10:1
  2. Load time is almost 8:1
  3. Number of requests is 8:1

So Simplified Saaze is better in all dimensions by a large factor. This is particularly striking as the Simplified Saaze version is entirely self-hosted, i.e., upload bandwidth to the internet is only 50 MBit/s!

The recommendations for the original website are therefore not overly surprising:

The missing compression is clearly an oversight on the web-server part.

The breakdown of the content type for the original website is:

2. Modified website. The website powered by Simplified Saaze is still very image heavy, but there is no JavaScript, there are no webfonts, and no megabytes of CSS. The breakdown of the modified site is as below.

Actual loading of the modified site will roughly follow below waterfall diagram. This waterfall diagram shows that the images can all be loaded in parallel, while the actual HTML is one of the prime factors for the overall request time. Also, all images are way smaller, as they all have been converted to WebP format.

3. Competitor. There is another Italian restaurant in town, Ristorante Bella Vista. Pingdom values are as below.

Content type breakdown is:

The good request times can be attributed to:

  1. Jpeg images have been scaled down to make them small
  2. Less than 100 KB of JavaScript
  3. No webfonts

The Bella Vista website is hosted by Hetzner. It uses Weblication CMS, which is also based on PHP like WordPress.

]]>
https://eklausmeier.goip.de/blog/2023/09-27-example-theme-for-simplified-saaze-panorama https://eklausmeier.goip.de/blog/2023/09-27-example-theme-for-simplified-saaze-panorama Example Theme for Simplified Saaze: Panorama Wed, 27 Sep 2023 14:30:00 +0200 1. Features. Here is another theme called Panorama for Simplified Saaze. The example content is from Ristorante Panorama. This theme has below properties:

  1. It is geared towards restaurants with menus
  2. Responsive with media-breaks for 1-column, 2-column, 3-column, and printer output
  3. RSS and sitemap
  4. Showcase for post-processing, if needed
  5. Hero image
  6. Background SVG image
  7. Animated images and galleries
  8. Lightweight and easy to use

Its source code is in GitHub: saaze-panorama.

Here is a screenshot: Photo

The original website uses WordPress, Elementor, and Google Site Kit. The original website has a number of major shortcomings:

  1. Terribly slow
  2. Loading web-fonts, which are not used
  3. Loading images, which are not used
  4. Duplicated text on a single webpage
  5. RSS feed empty
  6. Google indexing disabled

In addition there are various minor glitches:

  1. Misspellings
  2. Navigation mishaps: redirecting to same page
  3. Color contrast sometimes bad: green text on black background
  4. Favicon too small to be recognizable

This theme is the eighth example theme. We had themes migrated from WordPress, from Hugo, and from Jekyll. This time it is again a migration from WordPress with Elementor.

2. Creating restaurant menus with post-processing. A restaurant obviously wants to show its menu. This is done as follows:

## 2. Kalte und warme Vorspeisen<a id=vorpeisen></a>

- Antipasti dela Casa ...... klein €9,50 - groß €12,50
    - mit gegrilltem Gemüse und Fisch
- Shrimps Cocktail `1,b,c,d,n` ...... € 9,50
    - mit Shrimps, Ananas
- Shrimps mit Olivenöl, Cocktailtomaten `1,b,c,n` ...... €12,50
    - mit Knoblauch & Schalotten
- Gebackener Schafskäse `1,b,c,d,n` ...... €10,50
    - mit Tomaten, Oliven und Peperoni

So data entry closely mirrors the output, which looks like this: Photo

The CSS for this "dot-trick" can be found here: Dot Leaders by Bert Bos.
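The gist of that technique, condensed (a sketch following Bert Bos's page, not the exact CSS of this theme):

ul.leaders { max-width:40em; padding:0; overflow-x:hidden; list-style:none }
ul.leaders li:before {	/* a long run of dots, clipped by the list item */
    float:left; width:0; white-space:nowrap;
    content:". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . " }
ul.leaders span:first-child { padding-right:0.33em; background:white }	/* dish name covers dots on the left */
ul.leaders span + span { float:right; padding-left:0.33em; background:white }	/* price covers dots on the right */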

The frontmatter of the Markdown looks like this, indicating that it wants its output to be processed further:

---
title: "Speisekarte"
date: "2023-07-11 21:00:00"
excerpt: "Italienisch-mediterrane Köstlichkeiten, ausgewählte deutsche Spezialitäten, Steaks und Fisch."
heroimg: "Schweinefleisch.webp"
postproc: true
---

Above frontmatter also shows how the hero-image is defined.

The actual post-processing is done in the template-file entry.php:

<?php require SAAZE_PATH . "/templates/top-layout.php"; ?>

    <main>
    <article class=aentry>
<?php
    if (!function_exists('postproc')) {
        // Post-processing of MD4C-processed Markdown; not really clean,
        // because it is probably specific to MD4C, but it does the job
        function postproc(string $s) : string {
            //return $s;
            $s = str_replace(
                array(PHP_EOL.'<ul>',
                    PHP_EOL.'<li><p>',
                    '</p>'.PHP_EOL.'</li>',
                    '<ul>'.PHP_EOL,
                    'class=leaders>'.PHP_EOL.'<li><code>'),
                array(PHP_EOL.'<ul class=leaders>',	// add class=leaders to ul
                    PHP_EOL.'<li>',	// strip <p> after <li>
                    '</li>',	// strip </p> before </li>
                    '<ul class=noleaders>'.PHP_EOL,	// 2nd ul must not have leaders but noleaders
                    'class=noleaders>'.PHP_EOL.'<li><code>'),	// Allergene Sonderfall
                $s);
            // replace ABC ...... UVW with ABC+UVW each enclosed in span's
            // catchword is six dots
            return preg_replace(
                '/(' . PHP_EOL . '<li>)(.+)\s+\.\.\.\.\.\.\s+(.+)(<ul|<\/li>)/',
                '$1<span>$2</span><span>$3</span>$4',
                $s
            );
        }
    }
    echo '<h1>' . $entry['title'] . "</h1>\n";
    if (isset($entry['heroimg']))
        printf("<p><img class=heroimg src=\"%s/img/%s\" alt=\"Hero image\"></p>\n",$rbase,$entry['heroimg']);
    $s = ($entry['postproc'] ?? false) ? postproc($entry['content']) : $entry['content'];
    echo $s;


?>
    </article>
    </main>

The post-processing effectively just searches and replaces certain strings, in our case the six dots.

If you also want to mix PHP into Markdown with this theme, then replace the above echo $s with the three PHP lines below.

    $s = str_replace('*%3c?','<?',$entry['content']);
    $s = str_replace('?%3e*','?>',$s);
    require 'data:text/plain;base64,'.base64_encode($s);

If you omit the post-processing function postproc() from the above template, the template becomes pretty simple.

3. Installation. The theme including Simplified Saaze is installed by using composer:

composer create-project eklausme/saaze-panorama

This installs below directory tree:

saaze-panorama
|-- LICENSE
|-- README.md
|-- composer.json
|-- composer.lock
|-- content
|   |-- auxil
|   |   |-- datenschutzerklaerung.md
|   |   `-- impressum.md
|   |-- auxil.yml
|   |-- blog
|   |   |-- aktuell.md
|   |   |-- biergarten.md
|   |   |-- catering.md
|   |   |-- feiern.md
|   |   |-- mittagstisch.md
|   |   |-- pfifferlinge.md
|   |   |-- ristorante.md
|   |   `-- speisekarte.md
|   `-- blog.yml
|-- public
|   |-- img
|   |   |-- Aussenbereich1.jpg
|   |   |-- Aussenbereich1.webp
|   |   |-- Aussenbereich2.jpg
|   |   |-- . . .
|   |   `-- green-orange-and-yellow-pasta-165844-2000x1200-1.webp
|   `-- index.php
|-- saaze
|-- templates
|   |-- bottom-layout.php
|   |-- entry.php
|   |-- error.php
|   |-- head.php
|   |-- index.php
|   |-- overview.php
|   |-- rss.php
|   |-- sitemap.php
|   `-- top-layout.php
`-- vendor
    |-- autoload.php
    |-- composer
    |   |-- ClassLoader.php
    |   |-- InstalledVersions.php
    |   |-- LICENSE
    |   |-- autoload_classmap.php
    |   |-- autoload_namespaces.php
    |   |-- autoload_psr4.php
    |   |-- autoload_real.php
    |   |-- autoload_static.php
    |   |-- installed.json
    |   |-- installed.php
    |   `-- platform_check.php
    `-- eklausme
        `-- saaze
            |-- BuildCommand.php
            |-- Collection.php
            |-- CollectionArray.php
            |-- Config.php
            |-- Entry.php
            |-- LICENSE
            |-- MarkdownContentParser.php
            |-- README.md
            |-- Saaze.php
            |-- SaazeCli.php
            |-- TemplateManager.php
            |-- composer.json
            |-- php_md4c_toHtml.c
            `-- saaze

11 directories, 137 files

Here are two articles if you want to install Simplified Saaze on Windows:

  1. Installing Simplified Saaze on Windows 10
  2. Installing Simplified Saaze on Windows 10 #2

4. Building and deploying. Change to the directory saaze-panorama. The following command builds a static site.

$ time php saaze -morb /tmp/build
Building static site in /tmp/build...
        execute(): filePath=/home/klm/php/saaze-panorama/content/auxil.yml, nentries=2, totalPages=1, entries_per_page=20
        execute(): filePath=/home/klm/php/saaze-panorama/content/blog.yml, nentries=8, totalPages=1, entries_per_page=20
Finished creating 2 collections, 2 with index, and 10 entries (0.02 secs / 1.68MB)
#collections=2, YamlParser=0.0002/12-2, md2html=0.0004, MathParser=0.0003/10, renderEntry=10, content=10/0, excerpt=0/0
        real 0.04s
        user 0.01s
        sys 0
        swapped 0
        total space 0

As can be seen, build time is way below a tenth of a second on a Ryzen 7 5700G. In the above scenario we use option -m for generating a sitemap, -o for generating an overview page, and -r for generating RSS. Option -b is used to build in /tmp, which on Arch Linux is a RAM disk. Options -m, -o, and -r are entirely optional, i.e., the command below would do just as well.

php saaze

The resulting HTML files need to be uploaded to your web-server. Below are the steps to upload to a local web-server, assuming you built into /tmp/build; the errorExit helper and the two directory assignments at the top are placeholders to adapt. A local web-server is a web-server running on the same machine where you generated the HTML files.

errorExit() { echo "$@" >&2; exit 1; }  # small helper, assumed here
DOCROOT=/srv/http                       # placeholder: document root of your web-server
SAAZEROOT=$HOME/php/saaze-panorama      # placeholder: where composer created the project

[ -d $DOCROOT ] && rm -rf $DOCROOT
[ -d /tmp/build ] || errorExit "No build directory in /tmp"
mv /tmp/build $DOCROOT

cd $DOCROOT
ln -s $SAAZEROOT/public/img

For local development of your website, you use:

php -S 0:8000 -t public/

This starts a web-server and you can immediately see any changes you make. The above command runs Simplified Saaze in dynamic mode. You can also use this dynamic mode with NGINX with a configuration like the one below:

server {
    rewrite "^/(aux|blog)($|/.*)"  "...your-directory.../index.php?/$1$2" last;
}

The dynamic mode of Simplified Saaze has the advantage that you don't need to build any static HTML files. All HTML files are generated on the fly. The disadvantage is that every request will rebuild the requested HTML page, unless you use intensive caching in your web-server.

5. CSS and favicon. The panorama-theme uses a SVG based background image. This was inspired by Matt Visiwig's page on SVG backgrounds. In our case we used "Subtle Prism". We already mentioned the dot-leader CSS for aligned lines of dots.

Generating the favicon was done using the web-page favicon-generator using the two letters 'R' and 'P' with circled background. The favicon is directly embedded into the head.php template. This helps to reduce the number of requests required for the browser to show the web-page.

<link href="data:image/png;base64,iVBORw....ABJRU5ErkJggg==" rel="icon" type="image/png">
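The base64 payload can be generated with coreutils, e.g. like so (a sketch; favicon.png stands for whatever the generator produced):

printf '<link href="data:image/png;base64,%s" rel="icon" type="image/png">\n' \
    "$(base64 -w0 favicon.png)"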

I had written on this here: Accelerating Page Load Times by Reducing Requests, Part #2.

The three-column output is realized using CSS grids.

@media screen and (min-width:99rem) {	/* 3 column output */
    .aentry, header, aside, footer { width:var(--klmWidth) }
    .aindex { margin-left:0rem; width:20rem }
    .allcontent { max-width:var(--klmWidth); margin:auto; padding:0rem }
    .agrid-container {
        display:grid;
        justify-content:center;
        column-gap:2rem;
        grid-template-columns: auto auto auto;
        grid-template-areas: 'article article article';
    }
    /* https://www.w3docs.com/snippets/css/how-to-vertically-align-text-next-to-an-image.html */
    .imgcontainer { display:flex; align-items:center }
    .textimg { padding-left:2.5rem }
}

Printing to an old-fashioned printer is handled by a special print media query:

@media print {
    h2 { page-break-before: always }
    h1, h2, h3, h4, h5, h6, ul, li, p { color:black }
}

Most notably this is for printing out the menu card. This matters because you now have a single source of truth: the menu on the web-page is also the printed menu.

6. Home page / index. The index-page or landing page of this theme is somewhat special as it shows all blog posts, but singles out the newest one. This newest post is interesting as it might contain offers of the day, special announcements on opening hours or holidays, etc. Photo

<?php
    if (count($pagination['entries']) > 0) {
        $entry = array_shift($pagination['entries']);	// 1st element, i.e., newest
        echo "<aside>\n" . $entry['content'] . "</aside>\n";
    }
?>

All other posts are handled as usual:

<?php foreach ($pagination['entries'] as $entry) { ?>
    <article class=aindex>
    <h2><a href="<?= $rbase . $entry['url'] ?>"><?= $entry['title'] ?? 'Unknown title' ?></a></h2>
<?php if (isset($entry['heroimg'])) { ?>
    <div class=ixImgContainer><a href="<?=$rbase.$entry['url']?>"><img class=ixImgZoomIn width=300 src="<?=$rbase?>/img/<?=$entry['heroimg']?>" alt=HeroImg></a></div>
<?php } ?>
    <p><?= $entry['excerpt'] ?? '---' ?></p>
    </article>
<?php } ?>
]]>
https://eklausmeier.goip.de/blog/2023/09-23-malcolm-gladwell-meritocracies-do-not-work https://eklausmeier.goip.de/blog/2023/09-23-malcolm-gladwell-meritocracies-do-not-work Malcolm Gladwell: Meritocracies don't work Sat, 23 Sep 2023 21:45:00 +0200 Malcolm Gladwell was invited to Google Zeitgeist again. He gave a talk on meritocracies and their failures. This is somewhat a follow-up on his earlier talk given on Google Zeitgeist. I had written on this prior talk here: Malcolm Gladwell: Don't go to Harvard, go to the Lousy Schools!.

The talk is here:

Meritocracy, a ruling system based on merit:

the notion of a political system in which economic goods or political power are vested in individual people based on ability and talent, rather than wealth or social class. Advancement in such a system is based on performance, as measured through examination or demonstrated achievement.

At first sight it looks like this will favor good outcomes. Malcolm Gladwell analyzes a number of pitfalls.

Below is the transcript of the talk directly from YouTube:

It's a real pleasure to be invited back to Google's Zeitgeist. I think the last time I spoke at this was many, many years ago in Phoenix, and if memory serves, my talk was a critical examination of my decision to agree to talk at Google Zeitgeist. Incredibly, I got invited back, and so I thought as an encore what I would do is a critical examination of why all of you were invited to Google Zeitgeist. Now, there is a standard answer to that, which is that this is a gathering of the best and the brightest and all of you have reason to believe that you are the best and the brightest. But my question is: How do you know you're the best and the brightest? And what I want to suggest this morning is that there is a great deal more uncertainty over that question than you may care to admit, and that paradoxically, this is a very good thing. So, I want to focus in the brief time that I have on the role of gatekeepers, because meritocracies of the sort that we've erected in our world are run by gatekeepers, and I would like to advance a series of propositions to suggest that gatekeepers are really, really bad at what they do. There are going to be four of these propositions.

Proposition 1. Gatekeepers very often do not understand the meritocracy that they are supposed to be keeping.

Proposition 1 is that gatekeepers very often do not understand the meritocracy whose gates they are supposed to be keeping. So, tons of examples, but the one I will focus on is the NIH, the National Institutes of Health. This is one of the most consequential meritocracies in the world, probably. NIH has a budget of 40 billion dollars a year. They get 80,000 grant applications a year, which represents an extraordinary percentage of the most crucial research we do in the world, and they put together groups of experts who grade each one of those grant applications on a scale of 10 to 90 where 10 is fantastic and 90 is terrible, right. So, this is a classic meritocracy guarded by a group of expert gatekeepers.

So, a couple of years ago the guy, the Deputy Director for Extramural Research at NIH, the guy running this process, decided to try and verify how good the process is, right. So, when you do a score on a grant application, you're making a prediction of how good you think that research is going to turn out to be. So, his question was, well, how good are these predictions? He does a really simple analysis. In medicine, the way we judge the quality of research is how many citations are made to that research once it is finished. He says let's simply correlate the grant score on an application with the number of citations it gets once the research is finally finished. So, what does he discover? He discovers that the correlation between your score and how good your research ends up being is modest to nonexistent, right. Now, we're talking about one of the most crucial meritocracies in modern society. We're talking about $40 billion of intellectual activity, and the guy running the whole show takes a look and discovers the experts who are manning the gates to this particular meritocracy don't know what they're doing. So, why doesn't gatekeeping work in this example? Well, one is maybe it's impossible to predict who is a good researcher and who isn't. Maybe the groups of experts, by virtue of being experts, belong to a particular generation of medical research and are hopelessly out of touch with what the next generation of research is supposed to be all about. It doesn't really matter. The point is that this is a meritocracy that is not a meritocracy, right. My favorite response to this paper by Dr. Lauer: a bunch of microbiologists published a paper where they said the only rational thing now is to tell all the grant reviewers to go home and shut down that entire cumbersome process of trying to evaluate all these 80,000 grant applications; just have a big round cylindrical container, put all the applications in the container, and pick them out at random, and that should be how we govern the grant process in this country. That strikes me as a system that makes a great deal of sense. Okay.

Proposition No. 2. Meritocracies don't work sometimes because they are run for the benefit of gatekeepers.

Proposition No. 2. Meritocracies don't work sometimes because they are run for the benefit of gatekeepers. Again, many examples.

The LSAT. I got so obsessed with the LSAT a couple of years ago that I took it. I challenged my assistant to an LSAT contest. So, we all know about the LSAT. It's six sections, you know, reading, problem solving, logic problems, I've forgotten the others, writing. You get 35 minutes for each section, and your score determines whether you get into an elite law school, and whether you get into an elite law school determines whether you get an elite job once you graduate, and whether you get an elite job determines whether you get a job on the Supreme Court and an invitation to Zeitgeist. Psychometricians make a distinction between power tests and speed tests. So, a speed test is where I give you a whole lot of relatively easy questions, and I'm interested to see how many you can answer in a given amount of time, right. So, video games are really very often speed tests, right. We play under a time constraint and see how well you do under that constraint. A power test is where I give you really hard questions, and all I'm really interested in is how many of those questions you can get right. So, Scrabble tends to be, is really a power game. Untimed chess is a power game. So, what is the LSAT? Well, the LSAT is a series of very, very hard questions, but we require that the test taker complete them in a limited period of time. And the time constraint is deliberately strict. We want to make it hard enough that the average test taker cannot answer all the questions in the allotted time. So, what we have here is what a psychometrician would call a speeded power test. We're collecting power data with a speed constraint. Here is the question: Why do we collect it with a speed constraint? Why is there a 35 minute limit on the six sections of the LSAT? So, take a look at this. This is the first slide. Here we go. We have two test takers here. We have tortoise and hare. Hare, we all know hare is super speedy, very confident. He answers every one of the 101 questions on the LSAT in the time allotted. He gets 82 of them correct for an accuracy rate of 81.2%, and he has an LSAT score of 165 which puts him in the 94th percentile, gets a job at an elite law firm, works 80 hours a week, he never sees his kids, his marriage falls apart.

Tortoise, by contrast, allow me to say tortoise is a woman, for no particular reason. But she is super analytic. She doesn't do things quickly. Whenever she has a hard question, she goes over it 17 times. There is no way tortoise can finish the answers to all the questions in the time allotted, so she only gets to 80 questions, of which she answers 78 correctly. She has an accuracy rate of 97.5%, but she makes so many guesses that she gets penalized. She ends up with an LSAT score of 165, which is the 94th percentile. She gets a job at an elite law firm, works 80 hours a week, never sees her children, her marriage falls apart and she quits law.

Okay. The LSAT will tell you that these two people, hare and tortoise, are identical. They both got a score of 165, but who would you rather have as your lawyer? Would you like to call up hare and say, have you gotten to my contract yet, and have hare tell you, yeah, I looked at it over lunch, it's fine, right. No.

You want tortoise as your lawyer. You want the person who is analytical and doesn't skip ahead, right. But the result of creating a speeded power test for the LSAT, the result of having that time constraint, is to make hare look better than he is and tortoise look worse than she actually is, right. Why would we do this in a profession that is based on tortoise thinking? I was so baffled by this I went to see the organization that runs the LSAT: fancy office building in New Jersey, huge conference room. They all gathered, and I just said to them, can you explain to me why you have a time limit on the LSAT? Makes no sense. Why not just let them spend all day doing it, right. Ask really hard questions. That is what the law is: hard questions that take a long time. They charge by the hour, for goodness sake. There is no institutional reason why you would want people to move quickly, right. And I had my tape recorder all ready, because I was expecting them to give this long, nuanced answer, and their answer was: nah, it's just easier. How are you going to rent the hall for the whole day? All right.

Proposition 3. Meritocratic systems often do not recognize that being really good is not an individual effort, but a team accomplishment.

Proposition 3. This is a crucial one, particularly for the kind of intellectual work that we do in the modern economy. This is about surgeons. Now, we're all familiar with the observation that the more operations a surgeon does, the more procedures they do, the better they get. There is a learning curve with surgery. That is why we tend to have rare surgeries clustered at major teaching hospitals, so we can keep the volume of the surgeon up really high, right. You don't go and get some kind of very particular brain surgery at some rural hospital, you go to the major medical center for this very reason. So, this is a chart demonstrating this. Norwood operations are a very difficult pediatric cardiac surgery, and you can see the learning curve: if you are under like 150 cases a year, your mortality rate is really high. It's terrible. But once you get to about 400 a year, the mortality rate comes down dramatically, you know, to a quarter of what it would be otherwise. This is a pattern that we see throughout all of surgery, and the people who are on this end, right, are the ones who get rewarded, they're the ones who make the most money, the ones that have the fanciest titles; those are the ones that are the winners of the bureaucratic game that is academic medicine. Okay.

But there is a complication with this, and that is: what happens to the surgeons who do their Norwood operations at a different hospital? So, lots of surgeons do this, right. You have privileges at more than one hospital. Maybe you do 90% of your procedures at one place, but then you go down the street or across town on the weekends or whatever and do some at another place. And the answer is that when you leave your regular hospital and moonlight somewhere else, you move from being at this end of the curve and you go all the way back to the other end of the curve. This is a result beautifully demonstrated in this paper. I am going to read to you the conclusion. "Higher volume in a prior period for a given surgeon at a particular hospital is correlated with significantly lower risk adjusted mortality for that surgeon hospital pair." That is what they're talking about. That volume, however, does not significantly improve the surgeon's performance at other hospitals. What does that mean? Well, what that means is that cardiac surgery, or any kind of complex surgery, is a team activity, right. So, when you are with your team at your regular hospital, you all get better together, but then when you leave on the weekend to moonlight somewhere else, you leave your team behind, and without your team, you're hopeless, right. Now, does the meritocratic system recognize that being a really good surgeon is not an individual accomplishment, but a team accomplishment? No, it doesn't, right. The whole meritocratic system is based on the assumption that what we're observing here is the greatness of this particular individual surgeon. Now, I would suggest to you that is a pretty big problem, particularly if you are someone who picks your elite surgeon and just happens to be seeing that surgeon at the hospital they're moonlighting at and not their regular place. And I think that this applies to an extraordinary number of complex, allegedly meritocratic systems. I mean, think about me up here right now. How much of this talk is me? Do you know whether I wrote it or whether a team wrote it, right? You don't know. You have no idea how good I am based on this particular talk that I'm giving you without knowing the actual process that I use to come up with these observations. Okay.

Proposition 4. Meritocracies are bad because gatekeepers don't fix them.

Proposition No. 4. Meritocracies are bad because gatekeepers don't fix them. Once you realize there is an accumulating body of knowledge that suggests we're not very good at managing meritocracies, then you would assume that there should be an ongoing process by which we try and improve the quality of the gatekeeping function, and it turns out that there isn't. So, again, a million examples, but this is one I've been obsessed with for a while. I wrote about it in my book Outliers in 2008. This is the roster of the 2007 Medicine Hat Tigers. This is the actual chart I used in my book Outliers, and this is a major junior hockey team, so this is one rung below the NHL. The point of this, those who read Outliers will know this, the point of this chart is: this is a group of elite hockey players in a country that takes hockey very seriously, and what is most striking about this group is how many of them are born in the first four months of the year. January, January, March, April, September, October. April, and January, January, August, March, May, January, right. Now, this is a very, very well known phenomenon, it's called a relative age effect, and it's a function of the way in which we structure the particular meritocracy that is elite youth sports. In Canada, they're crazy about hockey, so they start forming all-star teams at the age of nine, and at the age of nine, the kids who look like the best hockey players are the ones who are relatively the oldest. If you are born in January, you're going to look better than a kid born in December. So, we take that kid out and put them on an all-star team and give them way more practice time, way better coaches, way more access to good competition, way more encouragement, and lo and behold, ten years later they are the best. An arbitrary advantage has been elevated to a real advantage. You can see this everywhere. It's true in soccer, basketball, swimming. Any competitive sport that looks to identify and develop talent at an early age has the problem of creating these arbitrary relative age effect advantages. For example, look at this. Schools. This is a study of gifted and talented programs in England, and they have broken down the composition of gifted and talented programs by birth cohort. In England the cutoff is September 1st. If you are in the relatively youngest cohort in your class, your chance of being in a gifted science and math program is roughly half of that of those who are born in the relatively oldest cohort. Basically, if your kid is among the youngest in your class, kiss goodbye to getting into a gifted and talented program. And of course, we use those to decide who gets into quality schools, and we use quality schools to determine who advances, and so on. It's the same old system. This is not a meritocracy; it's something pretending to be a meritocracy. I wrote my book in 2008, and as a result of my book, there was a lot of public attention to this particular relative age effect. I thought when I was coming here to talk about meritocracies, what I would do was revisit the hockey example and show you how this particular Canadian institution (Canada is important to me) has learned its lesson and fixed its ways and no longer pursues a policy that has the misfortune of leaving half the talent on the table. So, I decided I would look at the 2022 Canadian junior hockey roster, and let's just go through the birth months, shall we.
September, November, June, March, January, January, April, September, January, August, September, July, October, January, February, January, August, April, October, January, March, January, March, February, January. They have learned nothing, right.

They haven't done a single thing to fix the problem. 15 years ago, it was brought to their attention that a country that was more passionate about hockey than almost anything else had created a system that was arbitrarily leaving half the talent on the table, right. No one could possibly be more powerfully motivated to fix this system than Canadians, right. Hockey is the national everything. Have they fixed it? No, they haven't. By the way, has anyone fixed the system? No. Think about your child's elementary school. In first and second grade, do they divide the kids up and put the January to March kids in one class and the April to June kids in another and the July to September kids in another? No, they don't do that. Right. Even though we've had years and years of evidence that it's completely unfair to ask a January kid to compete with a December kid. When your child takes standardized tests, do the kids born in December take the standardized tests on the same day as the kids born in January? Yes, they do. Does that make any sense whatsoever? No, it doesn't, right. For some reason we are powerfully incurious about the problems that we have created with our meritocracies. We think we know a good research proposal from a bad one, and we don't. We think we know that we're selecting the right people for law school, and we aren't. We think we know that an individual is responsible for their surgical success, and they aren't. And when we're presented with evidence of the falsity of our systems, what do we do? We do nothing. Now, I said at the beginning that this observation about our failed meritocracies is a very good thing. How can it be? If we fixed meritocracies, then most of us wouldn't be here, right.

But think about it. If we fix the system, the people who would replace us at a conference like this would be so much smarter than we are. This conference would have been so much more fun. Google would make so much more money, and I wouldn't be here. Someone far more gifted than I would be giving this talk, and it would have been infinitely more interesting. Thank you.

Also see The rise and fall of peer review and the followup The dance of the naked emperors by Adam Mastroianni.

]]>
https://eklausmeier.goip.de/blog/2023/08-30-performance-comparison-gzip-vs-brotli https://eklausmeier.goip.de/blog/2023/08-30-performance-comparison-gzip-vs-brotli Performance Comparison gzip vs Brotli Wed, 30 Aug 2023 16:25:00 +0200 The NGINX web-server offers gzip, deflate, and Brotli compression. My current nginx.conf file uses

brotli_comp_level 10;

It looks like the default Brotli compression level 6 is indeed a sweet spot for Brotli.

1. Measurement. I used below software versions:

  1. Arch Linux kernel 6.4.12-arch1-1
  2. Brotli 1.0.9-12
  3. gzip 1.12-3

Machine is using an AMD Ryzen 7 5700G CPU with 64GB DDR4-3600 RAM.

All files were stored in /tmp, i.e., they were on a RAM disk. All compressed files were also written to /tmp. In total there were 544 HTML files, roughly 19 MB. The individual HTML files, of course, were smaller, otherwise I could not be a member of the 512KB club.

Testing Brotli compression:

/tmp/build: time brotli -kf -q 6 `find . -name \*.html`
        real 0.27s
        user 0.23s
        sys 0
        swapped 0
        total space 0

Testing gzip compression:

/tmp/build: time gzip -kf -9 `find . -name \*.html`
        real 0.34s
        user 0.32s
        sys 0
        swapped 0
        total space 0

Checking total file size:

wc `find . -name \*.br` | tail -3
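The same check works for the gzip output; wc -c reduces the output to byte counts (a sketch):

wc -c `find . -name \*.br` | tail -1   # total bytes of all Brotli files
wc -c `find . -name \*.gz` | tail -1   # total bytes of all gzip files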

2. Results. Real- and user-times are given in seconds.

Brotli      size   real    user      gzip      size   real   user
no compression  18,692,104
-q0    9,125,121   0.06    0.04
-q1    8,910,146   0.07    0.05      -1   8,983,674   0.20   0.18
-q2    8,657,583   0.11    0.09      -2   8,912,562   0.21   0.19
-q3    8,591,398   0.15    0.12      -3   8,862,408   0.22   0.20
-q4    8,205,937   0.21    0.19      -4   8,643,793   0.25   0.24
-q5    8,003,215   0.26    0.24      -5   8,576,144   0.28   0.27
-q6    7,998,547   0.27    0.23      -6   8,555,589   0.30   0.28
-q7    7,992,840   0.29    0.26      -7   8,549,382   0.31   0.30
-q8    7,990,726   0.30    0.28      -8   8,544,135   0.34   0.30
-q9    7,961,062   0.43    0.37      -9   8,543,902   0.34   0.32
-q10   7,510,277   5.59    5.55
-q11   7,427,506  14.16   14.08

3. Discussion. Even very low compression levels of Brotli lead to a significant reduction in compressed file size, way better than gzip. Starting at compression level 9, Brotli becomes slower than gzip but still compresses way better. My decision to use compression level 10 was motivated by the fact that many readers of this blog are not from Germany, e.g., they are from the US or from India. In this case I hope to trade CPU time for less data transferred across the wire.

]]>
https://eklausmeier.goip.de/blog/2023/08-29-from-hiawatha-to-nginx https://eklausmeier.goip.de/blog/2023/08-29-from-hiawatha-to-nginx From Hiawatha to NGINX Tue, 29 Aug 2023 11:45:00 +0200 Since mid-August I have switched from the Hiawatha web-server to the NGINX web-server. I initially intended to use the OpenLiteSpeed web-server, see Installing OpenLiteSpeed on Arch Linux, but installation and configuration of OpenLiteSpeed turned out to be complicated. I had previously experimented with and used Lighttpd.

1. Motivation. The author and maintainer of Hiawatha, Hugo Leisink, on 18-Feb-2019 stated on his weblog:

Many times, I wondered whether I should keep going on with the project or not, but somehow I always found a reason to continue. But not this time. Recently, a serious issue was found in the Hiawatha webserver and the fact that I didn't care much, made me realize that it's time to stop.

Clearly, Hiawatha will never support HTTP/2 or HTTP/3 ... new features will be based on what I need, not on what is needed for a webserver in general.

Over the years he did not change his opinion on that. So it clearly was time to find a web-server which is fully maintained and offers below functionality:

  1. Brotli compression
  2. HTTP/3 and QUIC
  3. URL rewriting
  4. Built-in caching like Varnish

In Set-Up Hiawatha Web-Server I compared the size of various web-servers.

web-server     #header files  #C files  LOC
Hiawatha 11.3  155            136       206,878
NGINX 1.25     136            259       229,625
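The counts can be approximated with find and wc; a sketch, run in the respective unpacked source tree (the exact method used for the table is not recorded here):

find . -name '*.h' | wc -l             # number of header files
find . -name '*.c' | wc -l             # number of C source files
cat `find . -name '*.[ch]'` | wc -l    # total lines of code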

2. NGINX installation. Installing NGINX is pretty simple as it is contained in the Extra-repository of Arch Linux. For installing the Brotli extension you need to install the NGINX source code, then download the Brotli module, and compile the module. Below comment from the original GitHub repository is important:

You will need to use exactly the same ./configure arguments as your Nginx configuration and append --with-compat --add-dynamic-module=/path/to/ngx_brotli to the end, otherwise you will get a "module is not binary compatible" error on startup. You can run nginx -V to get the configuration arguments for your Nginx installation. Then

$ cd nginx-1.25
$ ./configure --with-compat --add-dynamic-module=/path/to/ngx_brotli
$ make modules

A concrete example for installing brotli-1.0rc. Switch to the root user and go to the directory /usr/src/nginx. In the below configure call, the majority of the command line is the output from nginx -V.

./configure --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/bin/nginx --pid-path=/run/nginx.pid --lock-path=/run/lock/nginx.lock --user=http --group=http --http-log-path=/var/log/nginx/access.log --error-log-path=stderr --http-client-body-temp-path=/var/lib/nginx/client-body --http-proxy-temp-path=/var/lib/nginx/proxy --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-cc-opt='-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/nginx-mainline/src=/usr/src/debug/nginx-mainline -flto=auto' --with-ld-opt='-Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now -flto=auto' --with-compat --with-debug --with-file-aio --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_degradation_module --with-http_flv_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_v3_module --with-mail --with-mail_ssl_module --with-pcre-jit --with-stream --with-stream_geoip_module --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-threads --with-compat --add-dynamic-module=/tmp/ngx_brotli-1.0.0rc

Now make modules and copy the two files

cd objs
cp -p ngx_http_brotli_filter_module.so /usr/lib/nginx/
cp -p ngx_http_brotli_static_module.so /usr/lib/nginx/

3. Brotli compilation for Arch Linux. Go to /tmp directory.

git clone https://github.com/google/ngx_brotli.git
cd ngx_brotli
git submodule update --init

Go to NGINX source code: cd /usr/src/nginx, switch to root user.

./configure <output from nginx -V> --with-compat --add-dynamic-module=/tmp/ngx_brotli
make
cd objs
cp -p ngx_http_brotli_filter_module.so /usr/lib/nginx/
cp -p ngx_http_brotli_static_module.so /usr/lib/nginx/
systemctl start nginx

4. NGINX configuration. For the special case w.r.t. body-size I had already written on this here: nginx: 413 Request Entity Too Large - File Upload Issue. The general structure of a NGINX configuration is like below:

some global configuration;
http {
    server A {
        listen 80;
    }
    server B {
        listen 443;
    }
}

All the rewriting rules for port 80 and 443 are the same, just copied from the top server to the bottom server config; a way to avoid this duplication is sketched after the full configuration below.

#user http;
worker_processes  1;

error_log  /var/log/nginx/error.log;

load_module /usr/lib/nginx/ngx_http_brotli_filter_module.so;
load_module /usr/lib/nginx/ngx_http_brotli_static_module.so;


events {
    worker_connections  1024;
}


http {
    root   /srv/http;
    index  index.html;
    client_max_body_size 15900M;

    http2 on;
    gzip  on;
    brotli on;
    brotli_comp_level 10;
    brotli_types application/xml image/svg+xml text/css text/csv text/javascript text/markdown text/plain text/vcard text/xml;
    gzip_types application/xml image/svg+xml text/css text/csv text/javascript text/markdown text/plain text/vcard text/xml;

    fastcgi_cache_path /var/cache/nginx/ keys_zone=nginxpc:720m inactive=720m;
    fastcgi_cache_key "$request_method$request_uri";
    fastcgi_cache nginxpc;
    fastcgi_cache_valid 720m;


    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    log_format hiawatha_format '$remote_addr|$time_local|$status|$bytes_sent|$request|$http_referer|$http_user_agent|$host:$server_port|$https';
    access_log  /var/log/nginx/access.log hiawatha_format;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    http3 on;
    http3_hq on;
    types_hash_max_size 4096;

    server {
        listen       80;
        server_name  localhost;

        rewrite "^(/*)$" "/blog" redirect;
        rewrite "^/aux/search.php$" "/rewrite/sndsaaze/public/aux/search.php" last;
        rewrite "^/(404\.html|feed\.xml|sitemap\.html|sitemap\.xml)$" "/rewrite/sndsaaze/public/index.php?/$1" last;
        rewrite "^/(aux|blog|music|gallery)($|/.*)"  "/rewrite/sndsaaze/public/index.php?/$1$2" last;

        #charset koi8-r;

        error_page  404              /rewrite/sndsaaze/public/index.php?/404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }

        location ~ \.php$ {
            try_files $fastcgi_script_name =404;

            # default fastcgi_params
            include fastcgi_params;

            # fastcgi settings
            fastcgi_pass			unix:/run/php-fpm/php-fpm.sock;
            fastcgi_buffers			8 16k;
            fastcgi_buffer_size		32k;

            # fastcgi params
            fastcgi_param DOCUMENT_ROOT	$realpath_root;
            fastcgi_param SCRIPT_FILENAME	$realpath_root$fastcgi_script_name;
        }

        location ~ ^/ttyd(.*)$ {
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_pass http://eklausmeier.goip.de:7681/$1;
        }
    }


    server {
        listen       443 quic;
        listen       443 ssl;
        server_name  localhost;

        rewrite "^(/*)$" "/blog" redirect;
        rewrite "^/aux/search.php$" "/rewrite/sndsaaze/public/aux/search.php" last;
        rewrite "^/(404\.html|feed\.xml|sitemap\.html|sitemap\.xml)$" "/rewrite/sndsaaze/public/index.php?/$1" last;
        rewrite "^/(aux|blog|music|gallery)($|/.*)"  "/rewrite/sndsaaze/public/index.php?/$1$2" last;

        ssl_certificate      /etc/hiawatha/eklausmeier.goip.de.pem;
        ssl_certificate_key  /etc/hiawatha/eklausmeier.goip.de.pem;

        # From https://blog.qualys.com/product-tech/2013/08/05/configuring-apache-nginx-and-openssl-for-forward-secrecy
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";

        location / {
            # used to advertise the availability of HTTP/3
            add_header Alt-Svc 'h3=":443"; ma=86400';
        }

        location ~ \.php$ {
            try_files $fastcgi_script_name =404;

            # default fastcgi_params
            include fastcgi_params;

            # fastcgi settings
            fastcgi_pass			unix:/run/php-fpm/php-fpm.sock;
            fastcgi_buffers			8 16k;
            fastcgi_buffer_size		32k;

            # fastcgi params
            fastcgi_param DOCUMENT_ROOT	$realpath_root;
            fastcgi_param SCRIPT_FILENAME	$realpath_root$fastcgi_script_name;
        }

        location ~ ^/ttyd(.*)$ {
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_pass http://eklausmeier.goip.de:7681/$1;
        }
    }

}
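As noted above, the rewrite rules are duplicated in both server blocks. nginx's include directive can factor them out; a minimal sketch, assuming the four rewrite lines are moved to a made-up file /etc/nginx/rewrite-rules.conf:

server {
    listen 80;
    include /etc/nginx/rewrite-rules.conf;   # shared rewrite rules
    # ... rest as above ...
}

server {
    listen 443 ssl;
    include /etc/nginx/rewrite-rules.conf;   # same rules, maintained once
    # ... rest as above ...
}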

It is a common error to forget:

brotli_types application/xml image/svg+xml text/css text/csv text/javascript text/markdown text/plain text/vcard text/xml;
gzip_types application/xml image/svg+xml text/css text/csv text/javascript text/markdown text/plain text/vcard text/xml;

See the examples below, where this configuration has been forgotten:

  1. Analysis of Performance of Demo Open E-Mobility Site
  2. Performance Remarks on PublicoMag Website

Method to check for compression:

curl -D - -H "Accept-Encoding: gzip,deflate,br" --write-out "%{size_download}\n" -o /tmp/prism-css.br http://localhost/jscss/prism.css

5. Caching. By using fastcgi_cache with a quite large retention interval of 720 minutes (=12 hours), I keep already generated pages in the cache for a long time.

If you want to delete specific entries in the cache, the critical line is:

fastcgi_cache_key "$request_method$request_uri";

You can compute the file-name of the cache file by specifying request-method and URL like so:

printf "GET/blog" | md5sum

or

printf "GET/blog/2023/08-29-from-hiawatha-to-nginx" | md5sum

This will print the file-name which you can delete with rm. In our case in the directory /var/cache/nginx. See How to Setup FastCGI Caching with Nginx on your VPS.
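Putting this together, deleting the cached /blog page might look like this (a sketch; it assumes the flat cache layout configured above, i.e., no levels= parameter in fastcgi_cache_path):

hash=$(printf 'GET/blog' | md5sum | awk '{print $1}')  # cache file name = MD5 of the cache key
rm -f /var/cache/nginx/$hash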

6. Deployment. With this very aggressive caching in place, the deployment of my blog changed. Previously, I generated all static files with

php saaze -mortb /tmp/build

and then deployed via

blogdeploy -p

Above deployment script essentially just removes the previous directories and replaces them with the newly generated ones. I did this for the "staging environment" and "production", i.e., on my work-PC and on the self-hosting PC.

Now I rarely generate all static files and use the dynamic mode of Simplified Saaze, i.e., Simplified Saaze generates the HTML file whenever it is actually accessed. Once it is generated, NGINX caches it. So, essentially the generation of the static files is deferred to the actual access time:

$ ls -l /var/cache/nginx | wc
    350    3143   26826
]]>
https://eklausmeier.goip.de/blog/2023/08-28-crucial-4tb-ssd-in-asrock-a300m https://eklausmeier.goip.de/blog/2023/08-28-crucial-4tb-ssd-in-asrock-a300m Crucial 4TB SSD in Asrock A300M Mon, 28 Aug 2023 16:30:00 +0200 Task at hand: Increase SSD storage on Asrock A300M mini-PC, as existing SSD is 90% full.

Solution: Buy a new 2 TB SSD, or use an existing 2 TB SSD, for example a Samsung.

Bad idea: Buy a new 4 TB SSD from Crucial and insert it into the A300M.

1. Problem statement. Since May 2020 I own an Asrock A300M mini PC with a Ryzen 3400G CPU. It's a nice and reliable computer, which is used for hosting this blog. There were two problems with the disk:

  1. I had a major pacman-upgrade issue, with hundreds of zero-sized packages left on disk.
  2. The 2 TB Viper disk was too small; it was more than 90% full.

I searched for SSDs and found a Crucial 4 TB SSD, which was advertised to work within the A300M-STX. This was quite remarkable as the Asrock website for the A300M offers no 4 TB SSD. See Storage QVL. Below is the screenshot of the obviously false claim that the Crucial 4 TB SSD works in the A300M.

Photo

So I ordered this 4 TB Crucial SSD for 182 EUR. For comparison, here are the SSD prices for my older SSD.

Date Model Price in EUR
26-Jul-2023 4TB Crucial P3 SSD M.2 2280 PCIe 3.0 x4 3D-NAND QLC 182.24
25-Apr-2022 2TB Samsung PM9A1 M.2 PCIe 4.0 x4 3D-NAND TLC 256.92
25-May-2020 2TB 3.0/3.1G Viper VPN100 M.2 PAT 329.00

One can clearly see that prices for SSDs have gone down significantly.

2. Problems with the Crucial SSD.

The new Crucial 4TB SSD:

Opening the Asrock A300M after 3 years of uninterrupted service. The fan has accumulated quite some dust.

Mounting the Crucial 4TB SSD in the A300M:

The "old" 2TB SSD from Viper.

The new 4 TB Crucial SSD makes the A300M completely unresponsive, i.e., the A300M does not boot at all.

I tried a number of countermeasures:

  1. Update BIOS in A300M: from p3.50 to p3.70 to p.370b
  2. Mount Crucial on different M.2 interface
  3. I contacted the German customer support

All to no avail.

I checked whether the Crucial SSD was working properly by putting it into my work PC: It worked flawlessly. So, the aforementioned advertisement is wrong. The Crucial does not work in the Asrock A300M.

Crucial customer support, part of Micron Commercial Products Group, obviously does not understand the problem and responds with generic text fragments.

3. Remedy. I put the new Crucial 4 TB into my work PC. From the work PC I used the "old" Samsung 2 TB:

After so many tries with negative results I am testing the A300M without case for some time:

Reassembling.

After intensive testing the A300M now has two SSDs and 4 TB of SSD storage. The output of lsblk -f is as below:

NAME           FSTYPE      FSVER LABEL      UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
nvme0n1
├─nvme0n1p1    vfat        FAT32            A743-0700
└─nvme0n1p2    crypto_LUKS 2                70260d27-bc13-44dd-9b30-168c2be7c72f
  └─viper      ext4        1.0              dac919c6-2f0f-466b-ada8-692ce6d16d91  983.9G    42% /mnt/viper
nvme1n1
├─nvme1n1p1    vfat        FAT32 BOOT_FAT32 EB01-74DD                             116.1M    54% /boot
└─nvme1n1p2    crypto_LUKS 2                9b0766ca-06ce-41d6-9b46-04c66573f3aa
  └─Samsung2TB ext4        1.0              63669b64-5753-44a6-8626-561a6c98ab5b  466.6G    70% /
]]>
https://eklausmeier.goip.de/blog/2023/08-27-mixing-php-into-markdown https://eklausmeier.goip.de/blog/2023/08-27-mixing-php-into-markdown Mixing PHP into Markdown Sun, 27 Aug 2023 16:00:00 +0200 Markdown is a simple language to write documents, which are finally converted to HTML. There are many conversion programs to convert from Markdown to HTML. This blog uses MD4C for this.

The CommonMark specification says:

An HTML block is a group of lines that is treated as raw HTML (and will not be escaped in HTML output).

Start condition: line begins with the string <?.

End condition: line contains the string ?>.

So it is possible to embed PHP in Markdown. Unfortunately, not every construct in Markdown passes PHP through undisturbed. For example, links and images, i.e., [reftext](ref) and ![](/imgref), destroy the PHP start and end tags <? and ?>. Luckily, these small glitches can be cured with some string replacements.

1. Examples. Embedding PHP code in Markdown allows us to write something like this, below is Markdown:

<?php
    // Build a map: package name => version, from the output of `pacman -Q`
    $pkgList = explode("\n",`pacman -Q`);
    $pkg = array();
    foreach ($pkgList as $e) { $f=explode(' ',$e); $pkg[$f[0]??'x'] = $f[1]??''; }
?>

Using neovim version <?=$pkg['neovim']?>.

That's exactly what is done in /uses.

Another example is adding information based on time:

On <?=date('d-M-Y')?> this blog has <?= `find ~klm/php/sndsaaze/content -name \*.md | wc -l` ?> entries.

Above code is used in Blog Anniversary: 500 posts.

Below code also shows the use of PHP within Markdown:

<?php $chap=0; $subchap=0; ?>

# <?=++$chap?>. First chapter
# <?=++$chap?>. Second chapter
## <?=$chap.'.'.(++$subchap)?> Subchapter

This produces:

<h1>1. First chapter</h1>
<h1>2. Second chapter</h1>
<h2>2.1 Subchapter</h2>

When PHP code is included in HTML code, which is legal in Markdown, no "escaping" with a star (*) is required.

PHP code in HTML code can be included verbatim. For example:

<p>2021-04-02 <a href="<?=$rbase?>/pkg/jpilot_2.0.1-1_amd64.deb">jpilot_2.0.1-1_amd64.deb</a>

Here is an example of PHP code in the URL part in Markdown:

The Markdown for _Simplified Saaze_ can also [contain PHP code](*<?=$rbase?>*/blog/2023/08-27-mixing-php-into-markdown)!

Embedding PHP in ordinary text is also no problem. No escaping with a star (*) is required.

2. Implementation. Within ordinary paragraph text it is easy to just embed PHP, which is passed through to HTML unchanged. In references I now use this character combination to later string-replace any glitches introduced by the Markdown-to-HTML conversion.

[reference text](*<?=$rbase?>*/htmlRef)
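MD4C percent-encodes the PHP tags inside the link target, so the generated HTML looks roughly like this (illustrative output; the exact escaping may depend on the converter version):

<a href="*%3C?=$rbase?%3E*/htmlRef">reference text</a>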

With these added asterisks I then later replace any conversion errors. Previously I just used below code to include HTML in Simplified Saaze's templates:

<?= $entry['content'] ?>

Now I use:

$s = str_replace('*%3C?','<?',$entry['content']);
$s = str_replace('?%3E*','?>',$s);
require 'data:text/plain;base64,'.base64_encode($s);

When using require with such a data: URI (the base64 encoding avoids any quoting issues with the content), you have to activate this in php.ini:

allow_url_include = On

See allow_url_include.

More information on data-wrappers is here: data:// and the comment by brainbox. Furthermore see PHP include and the comment by sPlayer. RFC 2397 details data:[<mediatype>][;base64],<data>.
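If activating allow_url_include is not an option, a temporary file achieves the same effect as the data: wrapper. Below is a minimal sketch under that assumption, with $s holding the repaired content as above:

<?php
// Hypothetical alternative to the data: wrapper, write the repaired
// content to a temporary file and require that file instead
$tmp = tempnam(sys_get_temp_dir(), 'saaze');
file_put_contents($tmp, $s);
require $tmp;   // executes any PHP embedded in $s
unlink($tmp);   // clean up the temporary file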

https://eklausmeier.goip.de/blog/2023/08-13-installing-openlitespeed-on-arch-linux https://eklausmeier.goip.de/blog/2023/08-13-installing-openlitespeed-on-arch-linux Installing OpenLiteSpeed on Arch Linux Sun, 13 Aug 2023 22:00:00 +0200 Unfortunately the AUR package for installing OpenLiteSpeed is broken. Additionally, the manual installation of OpenLiteSpeed via self-compilation is a mess.

1. Downloading rpm. Here we describe using the rpm repository of /edge/centos/8/x86_64/RPMS/. Essentially, this is a precompiled binary in rpm-form. Our version is openlitespeed-1.7.16-1.el8.x86_64.rpm. Another good candidate is openlitespeed-1.7.17-1.el9.x86_64.rpm in /centos/9/x86_64/RPMS/.

2. Unpacking rpm. In the following ols stands for the downloaded rpm-file without the rpm-suffix.

  1. Convert rpm to cpio: rpm2cpio ols.rpm > ols.cpio.zstd
  2. Uncompress the file with unzstd ols.cpio.zstd
  3. Unpack resulting cpio-archive with cpio -idmv < ols.cpio
  4. Move directory ./usr/local/lsws to /usr/local/lsws, and chown -R root:root if you haven't created the files as root user

3. Start webserver.

  1. Create Unix group lsadm, and user lsadm with /bin/nologin shell
  2. conf-directory: chown -R lsadm:lsadm /usr/local/lsws/conf
  3. Check for missing libraries with ldd bin/openlitespeed; I had to install the missing libcrypt.so.1 library via pacman -S libxcrypt-compat
  4. Copy the systemd service for starting and stopping, and enable the service:
cp -p /usr/local/lsws/admin/misc/lshttpd.service /usr/lib/systemd/system/lshttpd.service
systemctl enable lshttpd

The litespeed process changes the permissions of various config files to executable. This is just silly, but you have no chance to correct it, as every restart of litespeed changes the permissions again. In total OpenLiteSpeed needs 67 MB of disk space under /usr/local/lsws.

4. Admin console. The original installation contains some glitches, which need to be corrected. Though, overall the admin console is not very useful.

  1. The admin console needs the below symbolic links: go to /usr/local/lsws, then mkdir -p lsphp73/bin lsphp74/bin, change into these two bin directories, and create the symlink ln -s ../../fcgi-bin/lsphp
  2. Copy a pem-file to admin/conf/, symlink webadmin.crt and webadmin.key to this file, as the admin console enforces https. Alternatively you can change admin/conf/admin_config.conf and edit keyfile and certfile
  3. Login to admin console does not work out-of-the-box, therefore edit file admin/html.open/lib/CAuthorizer.php in line 261 and change return $auth; to return true; in PHP function authUser() to fix the authorization issue
  4. Admin console logs you out all the time: in PHP function __construct() in lib/CAuthorizer.php comment out the entire if clause for if (isset($_SESSION['timeout']) ...
  5. Admin console floods the log-file with various INFO and NOTICE messages
  6. Start web-server with /usr/local/lsws/bin/lswsctrl start, webserver listens on port 8088, admin console listens on port 7080

As the admin console is basically useless, I recommend simply disabling it with disableWebadmin 1 in conf/httpd_config.conf. Put it right after the servername line.

Restarting the webserver:

  1. stop webserver with lswsctrl stop
  2. remove cache with rm -rf cachedata and any sockets with rm admin/tmp/admin.*
  3. possibly remove old log file: rm logs/*.log
  4. start with lswsctrl start.

5. PHP via FastCGI. OpenLiteSpeed comes with PHP 5.6 installed as a specially compiled LSAPI binary. This version is way too old. Below steps configure PHP via FastCGI.

  1. Edit conf/httpd_config.conf and change user and group from nobody to http. The Linux user http is also employed by php-fpm. Change directory ownership: chown -R lsadm:http conf, and chown -R http:http tmp.
  2. Enable PHP-FPM (PHP FastCGI Process Manager): systemctl enable php-fpm
  3. Start PHP-FPM: systemctl start php-fpm
  4. Configure extprocessor for OpenLiteSpeed in conf/httpd_config.conf:
user                      http
group                     http

extprocessor fcgiphp {
   type                    fcgi
   address                 uds://run/php-fpm/php-fpm.sock
   note                    PHP FPM
   maxConns                10
   initTimeout             20
   retryTimeout            10
   respBuffer              0
   autoStart               0
}

The domain socket in extprocessor must match the value in /etc/php/php-fpm.d/www.conf; see the reference below the virtual host configuration. OpenLiteSpeed should not start php-fpm itself, therefore autoStart 0. The virtual host configuration is as below. The important part is the "context" which relates to fcgi.

# Virtual host config for klmblog
# 13-Aug-2023

docRoot                   /srv/http/
enableGzip                1
enableBr                  1

errorlog $VH_ROOT/logs/error.log {
  useServer               1
  logLevel                DEBUG
  rollingSize             10M
}

accesslog $VH_ROOT/logs/access.log {
  useServer               0
  rollingSize             10M
  keepDays                30
  compressArchive         0
}

context /p/ {
  type                    fcgi
  handler                 fcgiphp
  addDefaultCharset       off
}

context / {
  location                /srv/http/
  allowBrowse             1

  rewrite  {

  }
  addDefaultCharset       off

  phpIniOverride  {

  }
}
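For reference, the address in the extprocessor above must match php-fpm's listen directive. On Arch Linux the default in /etc/php/php-fpm.d/www.conf is:

; /etc/php/php-fpm.d/www.conf
listen = /run/php-fpm/php-fpm.sock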

6. Tips and tricks.

  1. The process IDs of the webserver are shown by pidof litespeed
  2. Before accessing the webserver with any queries, check logs/error.log for any entries with [ERROR]
https://eklausmeier.goip.de/blog/2023/08-11-blog-anniversary-500-posts https://eklausmeier.goip.de/blog/2023/08-11-blog-anniversary-500-posts Blog Anniversary: 500 posts Fri, 11 Aug 2023 21:20:00 +0200 This blog now has more than 500 posts.

1. Data. Generating static HTML files with Simplified Saaze shows the number of posts:

$ time php saaze -mortb /tmp/build
Building static site in /tmp/build...
        execute(): filePath=/home/klm/php/sndsaaze/content/aux.yml, nentries=6, totalPages=1, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/blog.yml, nentries=412, totalPages=21, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/gallery.yml, nentries=6, totalPages=1, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/music.yml, nentries=61, totalPages=4, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/error.yml, nentries=1, totalPages=1, entries_per_page=20
Finished creating 5 collections, 4 with index, and 504 entries (0.17 secs / 11.15MB)
#collections=5, YamlParser=0.0095/511-5, md2html=0.0153, MathParser=0.0082/504, renderEntry=504, content=504/0, excerpt=0/0
        real 0.19s
        user 0.12s
        sys 0
        swapped 0
        total space 0

The number of posts, excluding index.md files:

Nr Type number
1 aux 6
2 blog 412
3 gallery 6
4 music 61
5 error 1
  sum 486

Over the years the posts are distributed as follows, not counting the 14 index.md posts. Two posts have index: false in their frontmatter, and one post has draft: true, see Simplified Saaze. Therefore these posts do not show in the count from php saaze.

Year blog music
2008 2
2012 1
2013 101
2014 47
2015 41
2016 21
2017 21
2018 16
2019 8
2020 30
2021 51 13
2022 45 21
2023 31 27
sum 415 61

2. Counting content. Below Perl script blogyrcnt counts posts per year.

#!/bin/perl -W
# Count number of posts per year

use strict;
my %H;

while (<>) {
    $H{$1} += 1 if (/\.\/(\d\d\d\d)\/\d\d\-\d\d\-/);
}

for (sort keys %H) {
    printf("%4d: %d\n",$_,$H{$_});
}

Run the script as follows: go to the content/blog directory, then

$ find . | sort | blogyrcnt

Similarly for content/music.

As of 11-Aug-2023 this blog has more than 500 entries.

https://eklausmeier.goip.de/blog/2023/08-07-cpio-command-cheat-sheet https://eklausmeier.goip.de/blog/2023/08-07-cpio-command-cheat-sheet cpio command cheat sheet Mon, 07 Aug 2023 18:00:00 +0200 cpio is a command which is used less often than tar, so its required options are often forgotten. cpio needs the < redirection to read its archive from stdin.

  1. List content of cpio-archive: cpio -tv < archive
  2. Extract data out of cpio-archive: cpio -idmv < archive, -i is extract, -d creates required directories, -m preserves modification times

rpm-files are essentially cpio-archives or compressed cpio-archives.

https://eklausmeier.goip.de/blog/2023/08-03-a-parsec-scale-galactic-3d-dust-map-out-to-1-25-kpc-from-the-sun https://eklausmeier.goip.de/blog/2023/08-03-a-parsec-scale-galactic-3d-dust-map-out-to-1-25-kpc-from-the-sun A Parsec-Scale Galactic 3D Dust Map out to 1.25 kpc from the Sun Thu, 03 Aug 2023 14:00:00 +0200 Author

  1. Gordian Edenhofer (1,2,3)
  2. Catherine Zucker (3,4)
  3. Philipp Frank (1)
  4. Andrew K. Saydjari (3)
  5. Joshua S. Speagle (5,6,7,8)
  6. Douglas Finkbeiner (3)
  7. Torsten Enßlin (1,2)

Institute

  1. Max Planck Institute for Astrophysics, Karl-Schwarzschild-Straße 1, 85748 Garching bei München, Germany
  2. Ludwig Maximilian University of Munich, Geschwister-Scholl-Platz 1, 80539 München, Germany
  3. Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138
  4. Space Telescope Science Institute, 3700 San Martin Dr, Baltimore, MD 21218
  5. Department of Statistical Sciences, University of Toronto, Toronto, ON M5G 1Z5, Canada
  6. David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, Toronto, ON M5S 3H4, Canada
  7. Dunlap Institute for Astronomy & Astrophysics, University of Toronto, Toronto, ON M5S 3H4, Canada
  8. Data Sciences Institute, University of Toronto, Toronto, ON M5G 1Z5, Canada

Abstract

Context. High-resolution 3D maps of interstellar dust are critical for probing the underlying physics shaping the structure of the interstellar medium, and for foreground correction of astrophysical observations affected by dust.

Aims. We aim to construct a new 3D map of the spatial distribution of interstellar dust extinction out to a distance of 1.25kpc from the Sun.

Methods. We leverage distance and extinction estimates to 54 million nearby stars derived from the Gaia BP/RP spectra. Using the stellar distance and extinction information, we infer the spatial distribution of dust extinction. We model the logarithmic dust extinction with a Gaussian Process in a spherical coordinate system via Iterative Charted Refinement and a correlation kernel inferred in previous work. We probe our 661 million dimensional posterior distribution using the variational inference method MGVI.

Results. Our 3D dust map achieves an angular resolution of ${14'}$ ($N_\text{side}=256$). We sample the dust extinction in $516$ distance bins spanning 69 pc to 1250 pc. We obtain a maximum distance resolution of 0.4pc at 69pc and a minimum distance resolution of 7pc at 1.25kpc.

Conclusions. Our map resolves the internal structure of hundreds of molecular clouds in the solar neighborhood and will be broadly useful for studies of star formation, Galactic structure, and young stellar populations. It is available for download in a variety of coordinate systems at https://doi.org/10.5281/zenodo.8187943 and can also be queried via the publicly available dustmaps Python package.

Key words. interstellar dust -- interstellar medium -- Milky Way -- Gaia -- Gaussian processes -- Bayesian inference

1. Introduction

Interstellar dust comprises only 1% of the interstellar medium by mass, but absorbs and re-radiates $>30\%$ of starlight at infrared wavelengths (Popescu2002). As such, dust plays an outsized role in the evolution of galaxies, catalyzing the formation of molecular hydrogen, shielding complex molecules from the UV radiation field, coupling the magnetic field to interstellar gas, and regulating the overall heating and cooling of the interstellar medium (Draine2011).

Dust's ability to scatter and absorb starlight is precisely the reason why we can probe it in three spatial dimensions. It preferentially absorbs shorter wavelengths of a stellar spectrum, thus leading to stars behind dense dust clouds appearing reddened relative to their intrinsic colors. The amount by which stars behind dust clouds appear reddened allows us to infer the amount of dust extinction between us and the reddened star. In combination with distance measurements to reddened stars, we can de-project the integrated extinction measurements into a three-dimensional map of differential dust extinction.

Gaia has been transformative for the field by providing accurate distance information to more than one billion stars, primarily within a few kiloparsecs from the Sun. Precise distances not only improve our knowledge about a star's position, but they also break degeneracies inherent in the modeling of extinction and significantly reduce the extinction uncertainties (Zucker2019). Thanks to the large quantity of extinction and distance measurements available in the era of large photometric, astrometric, and spectroscopic surveys, we can now probe the 3D distribution of dust in the Milky Way on parsec scales.

A number of 3D dust maps combining Gaia and vast photometric and spectroscopic surveys already exist. These maps primarily differ in the way they account for the so-called fingers-of-god effect, or the tendency of dust structures to be smeared out along the line of sight (LOS). The effect stems from superior constraints on stars' plane-of-sky (POS) positions relative to their LOS distance uncertainties.

3D dust maps predominantly fall into two categories, each representing a trade-off between angular resolution and distance resolution: reconstructions on a Cartesian grid and reconstructions on a spherical grid. Cartesian reconstructions commonly feature less pronounced fingers-of-god but are lower in angular resolution (Vergely2022, Lallement2022, Lallement2019, Lallement2018, Capitanio2017) or encompass limited volumes of the Galaxy (Leike2020, Leike2019). Spherical reconstructions are often higher angular resolution and probe larger volumes of the Galaxy but come with more strongly pronounced fingers-of-god or similar artifacts (Green2019, Green2017, Rezaei2022, Rezaei2020, Rezaei2018, Rezaei2017, Chen2019, Chen2018, Dharmawardena2022, Leike2022).

Physical smoothness priors counterbalance the fingers-of-god effect as finger-like structures are a priori unlikely. In a Cartesian coordinate system it is comparatively easy to incorporate physical priors into the model such as the distribution of dust being spatially smooth. Smoothness priors are often incorporated using Gaussian Process (GP) priors. Sparsities and symmetries in the prior can be exploited to efficiently apply a GP on a regular Cartesian coordinate system.

Spherical coordinate systems break these sparsities and symmetries in the prior but are much better aligned with the desired spacing of voxels along the LOS. Nearby, voxels can be spaced densely while at greater distances voxels can be spaced further apart. Naively using a GP prior is infeasible and approximations either trade fingers-of-god artifacts for other artifacts (Leike2022) or are too weak to regularize the reconstructions (Green2019).

In this work, we present a 3D dust map that achieves high distance and angular resolution and probes a large volume of the Galaxy, all at a feasible computational cost. The map uses a new GP prior methodology to incorporate smoothness in a spherical coordinate system, mitigating fingers-of-god artifacts. With a spherical coordinate system we are able to probe dust beyond 1kpc while still resolving nearby dust clouds at parsec-scale resolution. In \Cref{sec:data}, we present the stellar distance and extinction estimates upon which our map is based. In \Cref{sec:priors}, we present our GP prior methodology for incorporating smoothness in a spherical coordinate system. \Cref{sec:likelihood} describes how we combine the data with our prior model and how we incorporate the distance uncertainties of stars. In \Cref{sec:posterior_inference} we describe our inference before recapitulating all approximations of the model and their implications in \Cref{sec:caveats}. Finally, in \Cref{sec:results} we present the final map and compare it to existing 3D dust maps and 2D observations.

2. Stellar Distance and Extinction Data

To construct a 3D dust map, we use the stellar distance and extinction estimates from Zhang2023, which are primarily based on the Gaia BP/RP spectra (spectral resolution R $\sim 30-100$). Zhang2023 adopt a data-driven approach to forward model the extinction, distance, and intrinsic parameters of each star given the combination of the Gaia BP/RP spectra and infrared photometry from 2MASS and unWISE (Carasco2021, DeAngeli2022, GaiaCollaboration2022, Montegriffo2022, Schlafly2019, Wright2010, Skrutskie2006). The model is trained using a subset of stars with higher resolution spectra ($R \sim 1800$) available with LAMOST (Wang2022, Xiang2022). The resulting catalog contains distance, extinction, and stellar type ($T_{eff}$, [Fe/H], $\log g$) information for 220 million stars. Throughout this work, we will denote the Zhang2023 catalog as ZGR23.

Compared to other stellar distance and extinction catalogs, the ZGR23 catalog features smaller uncertainties on the extinction estimates while still targeting a significant number of stars. Approximately 87 million ZGR23 stars have an $A_V$ uncertainty below $60$ mmag. Thus, ZGR23 achieves similar extinction uncertainties compared to the subset of $39,538$ stars in the StarHorse catalog (Queiroz2023) that have both higher resolution APOGEE spectra and Pan-STARRS1 (PS1; chambers2019) $grizy$ photometry (typical $A_V$ extinction uncertainty of $60$ mmag). While the ZGR23 catalog is limited to stars with Gaia BP/RP measurements, the quality of the data makes the inference from the ZGR23 catalog competitive with models based on catalogs with larger numbers of stars --- 799 million stars in Bayestar19 (Green2019), 265 million in StarHorse DR2 (Anders2019), and 362 million in StarHorse EDR3 (Anders2022). We further find the ZGR23 catalog to have fewer systematic shifts in the extinction and reliable extinction uncertainties based on an analysis in dust-free regions; see \Cref{appx:zgr23_in_dust_free_regions} for further details.

For our reconstruction we restrict our analysis to ZGR23 stars that have quality_flags<8 as recommended by the authors. We further subselect the stars based on their distance. We require ${1}/{(\omega-\sigma_\omega)}<1.8\,\hbox{kpc}$ and ${1}/{(\omega+\sigma_\omega)}>40\,\hbox{pc}$ with $\omega$ the parallax of a star and $\sigma_\omega$ the parallax uncertainty to enforce that all stars are likely within our reconstructed volume. In total, we select 53,880,655 stars.

The reliability of our reconstruction is predominantly limited by the quality and quantity of the data. Both strongly depend on the POS position and distance. \Cref{fig:density_of_stars} shows 2D histograms of stellar density in heliocentric Galactic Cartesian (X, Y, Z) projections, as well as the number of stars as a function of distance. The density of stars per distance bin first increases approximately quadratically with distance before falling off to a linear increase. At approximately $1.5\,\hbox{kpc}$ the number of stars per distance bin levels off before we start deselecting stars by requiring that they have a >1 sigma chance of being within 1.8kpc in distance. \Cref{fig:density_of_stars_mollview} shows a POS histogram of the stars. A clear imprint of the Gaia BP/RP selection function is visible, cf. CantatGaudin2022. A systematic undersampling of stars behind dense dust clouds is also apparent. We expect our reconstruction to be more trustworthy in regions of higher stellar density. Due to the obscuring effect of dust, regions within and behind dense dust clouds should be treated with more caution.

Heliocentric Galactic Cartesian (X, Y, Z) projected histograms.

Photo

Number of stars as a function of distance.

Photo

Caption: 2D histograms of the density of stars in heliocentric Galactic Cartesian (X, Y, Z) projections, as well as the density of stars as a function of distance, for the subset of the ZGR23 catalog used in the reconstruction of our 3D dust map. The latter visualization additionally shows a linear growth and a quadratic growth with distance for comparison.

Photo

Caption: Plane-of-sky distribution of the subset of ZGR23 stars used in the reconstruction of our 3D dust map.

3. Priors

Our quantity of interest is the 3D distribution of differential ZGR23 extinction $\rho$. By definition the differential extinction is positive. Furthermore, we assume it to be spatially smooth. A priori we assume the level of smoothness to be spatially stationary and isotropic.

Footnote: The ZGR23 extinction is in arbitrary units but can be translated to an extinction at any given wavelength by using the extinction curve published at https://doi.org/10.5281/zenodo.6674521. Furthermore, dust extinction can be translated to a rough hydrogen volume density by assuming a constant extinction to hydrogen column density ratio (see e.g. Zucker2021).

To reconstruct the 3D volume efficiently, we discretize it in spherical coordinates. Specifically, we discretize our reconstructed volume into HEALPix spheres at logarithmically spaced distances. We adopt an $N_\text{side}$ of $256$, corresponding to $786,432$ POS bins. This $N_\text{side}$ corresponds to an angular resolution of $14'$. For the LOS direction, we adopt $772$ logarithmically spaced distance bins of which $256$ are used for padding. In contrast to reconstructions with linearly-spaced voxels in distance, we are able to probe much larger volumes while maintaining high resolution at nearby distances.

Footnote: The angular resolution of $14'$ refers to the angular size of our voxels. It provides a lower bound on the minimum separation between dust structures that we are able to resolve. In practice the resolution is highly position dependent and is predominantly driven by the quantity and quality of the data.
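As a quick sanity check, these numbers are consistent with the standard HEALPix relations for the pixel count and the approximate pixel side length:

$$ N_\text{pix} = 12\,N_\text{side}^2 = 12 \cdot 256^2 = 786{,}432\,, \qquad \sqrt{4\pi/N_\text{pix}} \approx 3.998 \cdot 10^{-3}\,\hbox{rad} \approx 13.7' \approx 14'\,. $$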

We encode both positivity and smoothness in our model by assuming the differential extinction to be log-normally distributed

$$ \begin{equation} \rho = \exp{s} \end{equation} $$

with normally distributed $s$, where $s$ is drawn from a Gaussian process with homogeneous and isotropic correlation kernel $k$. From previous reconstructions of the differential extinction for the Gaia DR2 G-band $A_G$ (Leike2020), we have constraints on the correlation kernel of the logarithm of the differential extinction in a volume around the Sun ($|X| < 370\,\hbox{pc}$, $|Y| < 370\,\hbox{pc}$, $|Z| < 270\,\hbox{pc}$). As part of our prior model we use the inferred $A_G$ extinction kernel from Leike2020. To account for the conversion between the ZGR23 extinction and $A_G$ extinction, we add a global multiplicative factor to $s$ in our model. Furthermore, we infer an additive offset in the differential extinction. We place a log-normal prior on the multiplicative parameter and a normal prior on the additive one.

Footnote: By doing so (and by using ZGR23) we implicitly assume a spatially stationary reddening law for dust.

We enforce the correlation kernel $k$ using Iterative Charted Refinement (ICR) (Edenhofer2022). ICR enables us to enforce a kernel on arbitrarily spaced voxels by representing the modeled volume at multiple resolutions. It starts from a very coarse view of our modeled volume. On this coarsest scale, ICR models the GP with learned voxel excitations $\xi_e^{(0)}$ and an explicit full kernel covariance matrix. A priori the parameters $\xi_e^{(0)}$ are standard normally distributed and coupled according to $k$ via ICR. It then iteratively refines $n_\text{lvl}$ times its coarse view of the space with local, fine, a priori standard normally distributed corrections $\xi_e^{(1)},\dots,\xi_e^{(n_\text{lvl})}$ until reaching the desired resolution. In each refinement it uses $n_\mathrm{csz}$ neighbors from the previous refinement to refine one coarse pixel into $n_\mathrm{fsz}$ fine pixels. See \Cref{alg:icr}.

Caption: Pseudocode for ICR creating a GP $s$ from uncorrelated excitations $\left\{ \xi_e^{(0)},\dots,\xi_e^{(n_\mathrm{lvl})} \right\}$. Each coarse pixel at location $j$ is iteratively refined to $n_\mathrm{fsz}$ fine pixels using $n_\mathrm{csz}$ coarse pixel neighbors. The correlation kernel is denoted by $k$. Square brackets after variables and the two functions ndindex and shape denote NumPy-like (Harris2020) indexing routines. The call explicit_gp refers to an unspecified Gaussian Process model explicitly representing the covariance of $k$ for the pixel positions modeled by $\xi^{(0)}_e$.

ICR uses local corrections at varying resolutions and within a refinement assumes the previous iteration to have modeled the GP without error. Both lead to slight errors in representing the kernel. For our use case we encounter errors in representing the kernel of a few percent. We accept these errors as a trade-off that enables the reconstruction to probe larger volumes. We refer to Edenhofer2022 for a detailed discussion of the kernel approximation errors.

Overall, our model for the prior reads

$$ \begin{equation} \rho = \exp{\left[\mathrm{scl}(\xi_\mathrm{scl}) \cdot s\left(\xi_e^{(0)},\dots,\xi_e^{(n_\mathrm{lvl})}\right) + \mathrm{off}(\xi_\mathrm{off})\right]}, \end{equation} $$

where we denote the learned multiplicative scaling of $s$ by $\mathrm{scl}$, the learned additive offset by $\mathrm{off}$, and re-expressed both in terms of a priori standard normally distributed parameters $\xi_\mathrm{scl}$ and $\xi_\mathrm{off}$ respectively. The act of expressing $\mathrm{scl}$, $\mathrm{off}$ and $s$ via parameters with an a priori simpler distribution, here a standard normal distribution, is called re-parameterization. See Rezende2015 for a detailed discussion on this subject.

4. Likelihood

To construct the likelihood we first need to define how the differential extinction $\rho$ --- our quantity of interest --- connects to the measured data $\mathcal{D}$. Our data comprises POS position, extinction $\mathcal{D}_A$, and parallax $\mathcal{D}_\omega$ data. The POS position is in essence without error. The extinction data $\mathcal{D}_A=\{A,\sigma_A\}$ is in the form of integrated LOS extinctions to stars $A$ and associated uncertainties $\sigma_A$. The parallax data $\mathcal{D}_\omega=\{\omega, \sigma_\omega\}$ similarly is in the form of parallax estimates $\omega$ and uncertainties $\sigma_\omega$.

In our model, we focus on the measured extinction $A$ and do not predict parallaxes to stars. Instead, we condition our model on the parallax data $\mathcal{D}_\omega$ and split the likelihood into the probability of the measured extinction given the true extinction $a$ and the probability of the true extinction given uncertain parallax information

$$ \begin{align} P(A\,|\,\rho,\mathcal{D}_\omega) &= \int\mathrm{d}{a}\ P(A,a\,|\,\rho,\mathcal{D}_\omega) \\ &= \int\mathrm{d}{a}\ P(A\,|\,a) \cdot P(a\,|\,\rho,\mathcal{D}_\omega)\ . \label{eq:top_level_likelihood} \end{align} $$

The first term of the integrand is constrained by the quality of the extinction measurements and the second by the quality of the parallax measurements.

4.1 Response

The second term in \Cref{eq:top_level_likelihood}, $P(a\,|\,\rho,\mathcal{D}_\omega)$, can be expressed as the joint probability of extinction and true distance $d$ marginalized over the true distance

$$ \begin{align} P(a\,|\,\rho,\mathcal{D}_\omega) &= \int\mathrm{d}d\ P(a,d\,|\,\rho,\mathcal{D}_\omega) \\ &= \int\mathrm{d}d\ P(a\,|\,\rho,\mathcal{D}_\omega,d) \cdot P(d\,|\,\rho,\mathcal{D}_\omega) \ . \end{align} $$

We neglect data selection effects, i.e. $a$'s dependence on $\mathcal{D}_\omega$ given $d$ and $d$'s dependence on $\rho$ given $\mathcal{D}_\omega$, and use that the true extinction $a$ at known distance $d$ is simply the LOS integral of $\rho$ along the LOS to the star from zero to $d$

$$ \begin{align} P(a|\rho,\mathcal{D}_\omega) &= \int\mathrm{d}d\ P(a|\rho,d) \cdot P(d|\mathcal{D}_\omega) \\ &= \int\mathrm{d}d\ \delta\left( a - \underbrace{\int_0^{d}\mathrm{d}\tilde{d}\ \rho[\mathrm{POS}](\tilde{d})}_{:= R^{d}(\rho)} \right) \cdot P(d|\mathcal{D}_\omega) \end{align} $$

with $\rho[\mathrm{POS}]$ the slice of $\rho$ at the POS positions of the stars, $\delta$ the Dirac delta distribution defined by $\int_{-\infty}^{\infty}\mathrm{d}x\ f(x)\delta(x)=f(0)$ for any continuous $f$ with compact support, and $R$ the response which maps from $\rho$ to the domain of the measured extinction.

We approximate $P(a|\rho,\mathcal{D}_\omega)$ with a normal distribution

$$ \begin{equation} P(a|\rho,\mathcal{D}_\omega) \approx \mathcal{G}\left(a|\bar{a},\sigma_a^2\right) \end{equation} $$

with mean $\bar{a}$ and standard deviation $\sigma_a$ to obtain a tractable expression for \Cref{eq:top_level_likelihood}. The mean extinction $\bar{a}$ is

$$ \begin{align} \bar{a} &:= {\langle a \rangle}_{P(a|\rho,\mathcal{D}_\omega)} \\ &= \int\mathrm{d}a\,a\int\mathrm{d}d\ \delta\left( a - R^{d}(\rho) \right) \cdot P(d|\mathcal{D}_\omega) \\ &= \int\int\mathrm{d}a\,\mathrm{d}d\ a \cdot \delta\left( a - R^{d}(\rho) \right) \cdot P(d|\mathcal{D}_\omega) \\ &= \int\mathrm{d}d\ R^{d}(\rho) \cdot P(d|\mathcal{D}_\omega) \\ &= {\left\langle R^{d}(\rho) \right\rangle}_{P(d|\mathcal{D}_\omega)} \ . \end{align} $$

Assuming the parallax ${1}/{d}$ is normally distributed, i.e. $P(d|\mathcal{D}_\omega)=\mathcal{G}\left({1}/{d} \vert \omega, \sigma_\omega^2\right)$ with mean $\omega$ and standard deviation $\sigma_\omega$, then

$$ \begin{align} {\langle a \rangle}_{P(a|\rho,\mathcal{D}_\omega)} &= {\left\langle R^{d}(\rho) \right\rangle}_{\mathcal{G}({1}/{d} \vert \omega, \sigma_\omega^2)} \\ &= \int_0^{\infty}\mathrm{d}\tilde{d}\ \rho[\mathrm{POS}](\tilde{d}) \cdot \mathrm{sf}_\mathcal{G}\left({1}/{\tilde{d}} \vert \omega, \sigma_\omega^2\right) \end{align} $$

with $\mathrm{sf}_\mathcal{G}\left({1}/{d} \vert \omega, \sigma_\omega^2\right) := 1 - \int_{-\infty}^{{1}/{d}}\mathrm{d}\omega'\ \mathcal{G}\left(\omega' \vert \omega, \sigma_\omega^2\right)$ the survival function of the normally distributed parallax.

The standard deviation $\sigma_{a}$ can be understood as an additional error contribution for marginalizing over the distance. The error depends on the distance uncertainty and the dust along the full LOS

$$ \begin{equation} \sigma_{a}^2 := {\left\langle {\left(R^{d}(\rho)\right)}^2 \right\rangle}_{\mathcal{G}\left({1}/{d} \vert \omega, \sigma_\omega^2\right)} - {\langle R^{d}(\rho) \rangle}^2_{\mathcal{G}\left({1}/{d} \vert \omega, \sigma_\omega^2\right)} \ . \end{equation} $$

Evaluating both $\bar{a}$ and $\sigma_a^2$ is comparatively cheap in a spherical coordinate system since for a discretized sphere $R^{d}(\rho)$ is simply the cumulative sum of $\rho$ along the distance axis weighted by the radial extent of each voxel.
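Concretely, with $\Delta d_i$ the radial extent of voxel $i$ along the LOS, the discretized response for a star at distance $d_j$ is just this weighted cumulative sum:

$$ R^{d_j}(\rho) \approx \sum_{i \le j} \rho[\mathrm{POS}]_i \cdot \Delta d_i\,. $$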

4.2 Likelihood and Joint Probability Density

We assume the measured extinction to be normally distributed around the true extinction $a$. We take the inferred extinction $A$ from the catalog to be the mean of the normal distribution. The accompanying uncertainty $\sigma_A$ in the catalog is assumed to be the standard deviation of $P(A\,|\,a)$.

Some stars will have underestimated uncertainties either due to mismodeled intrinsic stellar properties in the inference or bad photometric measurements that were not flagged. We want our model to be able to detect and deselect stars which are in strong disagreement with the rest of the reconstruction. We do so by inferring an additional multiplicative factor per star $n_\sigma$ which scales $\sigma_A$. A priori, we assume $n_\sigma$ to be drawn from a heavy-tailed distribution. Specifically, we assume $n_\sigma$ to follow an inverse gamma distribution. We again express $n_\sigma$ in terms of standard normally distributed parameters $n_\sigma(\xi_\sigma)$ in the inference.

To summarize, our approximate likelihood, first introduced in \Cref{eq:top_level_likelihood}, reads

$$ \begin{align} &P(A\,|\,\rho,n_\sigma,\mathcal{D}_\omega) \\ &\approx \int\mathrm{d}{a}\ \mathcal{G}\left(A\,\left|\,a,{\left(n_\sigma \cdot \sigma_A\right)}^2\right.\right) \cdot \mathcal{G}\left(a\,\left|\,\bar{a}(\rho),\sigma_a^2(\rho)\right.\right) \\ &= \mathcal{G}\left( A \,\left|\,\bar{a}(\rho),\left[n_\sigma \cdot \sigma_A\right]^2+\sigma_a^2(\rho) \right.\right) \label{eq:total_likelihood} \end{align} $$

The uncertainty in the extinction $\sigma_{A}$ is scaled by $n_\sigma$ to deselect outliers and increased by $\sigma_{a}^2$ due to marginalizing over the distance uncertainty.

The joint probability density function of data and parameters reads

$$ \begin{align} P&(A,\rho(\xi),n_\sigma(\xi) \vert \mathcal{D}_\omega) \\ &=\ \mathcal{G}\left( A \,\left|\,\bar{a}(\rho(\xi)),\left[n_\sigma(\xi) \cdot \sigma_A\right]^2+\sigma_a^2(\rho(\xi)) \right.\right) \cdot \mathcal{G}\left( \xi \,\left|\,0, 1\right.\right) \end{align} $$

with $\xi$ the vector of all parameters of the model $\left\{\xi_e^{(0)},\dots,\xi_e^{(n_\mathrm{lvl})},\xi_\mathrm{scl},\xi_\mathrm{off},\xi_\sigma\right\}$. The complexity of the prior distributions has been fully absorbed into the transformations $s(\xi)$, $\mathrm{scl}(\xi)$, $\mathrm{off}(\xi)$, and $n_\sigma(\xi)$ from the a priori standard normally distributed parameters $\xi$.

Parameters of the prior distributions. The parameters $s$, $\mathrm{scl}$, and $\mathrm{off}$ fully determine $\rho$. They are jointly chosen to a priori yield the kernel reconstructed in Leike2020.

Name Distribution Mean Standard Deviation Degrees of Freedom
$s$ Normal 0.0 Kernel from Leike2020 786,432 × 772
scl Log-Normal 1.0 0.5 1
off Normal $-6.91\left(\approx\ln10^{-3}\right)$, the prior median extinction from Leike2020 1.0 1

Name Distribution Shape Parameter Scale Parameter Degrees of Freedom
$n_\sigma$ Inverse Gamma 3.0 4.0 #Stars = 53,880,655

Our priors in terms of non-standard-normal parameters are summarized in \Cref{tab:priors}. The priors for $s$, $\mathrm{scl}$, and $\mathrm{off}$ are chosen to a priori yield the kernel reconstructed in Leike2020. In contrast to Leike2020, we do not learn a full non-parametric kernel. However, we do infer $\mathrm{scl}$ and $\mathrm{off}$, the scale and zero-mode of the kernel. The prior for $n_\sigma$ is chosen such that the inverse gamma distribution has mode $1$ and standard deviation $2$.
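Indeed, inserting shape $\alpha=3$ and scale $\beta=4$ into the standard formulas for the mode and standard deviation of an inverse gamma distribution reproduces these values:

$$ \mathrm{mode} = \frac{\beta}{\alpha+1} = \frac{4}{3+1} = 1\,, \qquad \mathrm{std} = \sqrt{\frac{\beta^2}{{(\alpha-1)}^2(\alpha-2)}} = \sqrt{\frac{16}{4 \cdot 1}} = 2\,. $$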

5. Posterior Inference

In the previous section we took special care to express our model not only in terms of physical parameters, like the differential extinction density $\rho$, but also in terms of simpler parameters $\xi$. The act of expressing the parameters of the model $\mathrm{scl}$, $\mathrm{off}$, $s$, and $n_\sigma$ in terms of a priori standard normally distributed variables $\xi$ is called standardization, a special form of re-parameterization (see Rezende2015). Effectively we are shifting complexity from the prior to the likelihood. However, both the non-standardized and the standardized formulation of the joint model are equivalent. Standardizing models can lead to better conditioned inference problems as the parameters all vary on the same scales --- if the prior is not in conflict with the likelihood. We are going to use an inference scheme that relies on the standardized formulation.

We want to infer the posterior for our standardized model from \Cref{eq:joint_model}. Directly probing the posterior via sampling methods like Hamiltonian Monte Carlo (Hoffman2011) is computationally infeasible. Instead, we use variational inference to approximate the true posterior. Specifically, we use Metric Gaussian Variational Inference (MGVI, Knollmueller2019). We summarize the main idea behind MGVI in \Cref{appx:mgvi}. We do not approximate the posterior of the noise inference parameter $n_\sigma(\xi_\sigma)$ via variational inference and instead use only the maximum of the posterior for $\xi_\sigma$.

To speed up the inference, we start the reconstruction at a lower resolution ($196,608$ POS bins at $N_\text{side}=128$ and $388$ LOS distance bins) and restrict the inference to a subset of stars with a $\geq 2$ sigma chance of being within 600pc and a $\geq 2$ sigma chance of being farther than 40pc. We successively increase the distance range of the map up to which stars are incorporated in steps of 300pc from 600pc to 1.8kpc. Every time we increase the distance range, we reset the parameters for $n_\sigma$. Then, after all data is incorporated, we increase the angular and distance resolution of the reconstruction to the final resolution.

Our data selection deselects stars close to the maximum distance probed (cf. \Cref{fig:density_of_stars}). This effect leads to the outer regions of the map being informed by relatively fewer stars compared to the inner regions. We observe that these regions are prone to producing spurious features. For our final data products we remove the outermost 550pc from the data-constrained volume as we observe artifacts aligned with our data incrementation strategy within these regions. We believe 550pc to be a conservative cut but we advise caution when finding structures perfectly aligned with a sphere around the Sun at 600pc, 900pc, or 1200pc.

ZGR23 assumes all extinctions to be strictly positive. We neglect this constraint by assuming Gaussian errors, which leads to an artificial spike in extinction in the first few voxels in each direction. As we know those regions to be effectively free of dust from previous reconstructions (cf. Leike2020), we remove the innermost HEALPix spheres until the mean POS differential extinction as a function of distance reaches a local minimum at 69pc. We release an additional HEALPix map of integrated extinction out to 69pc from the Sun to correct integrated LOS predictions for the removed extinction.

Our inference heavily utilizes derivatives of various components of our model. Derivatives are used for the minimization as well as for the variational approximation of the posterior. Previous models such as Leike2019, Leike2020 relied on the Numerical Information Field Theory (NIFTy) package (Selig2013, Steiniger2017, Arras2019) and were limited to running on CPUs.

We employ a new framework called NIFTy.re (Edenhofer2023NIFTyRE) for deploying NIFTy models to GPUs. NIFTy.re is part of the NIFTy Python package and internally uses JAX (Jax2018) to run models on the GPU. We are able to speed up the evaluation of the value and gradient of \Cref{eq:joint_model} by two orders of magnitude by transitioning from CPUs to GPUs. Our reconstruction ran on a single NVIDIA A100 GPU with 80 GB of memory for about four weeks.

6. Caveats

We believe statistical uncertainties are the dominant source of uncertainty for our reconstruction. However, it is important to also consider sources of systematic uncertainties. Depending on the application, the systematic uncertainties may be more important than the statistical uncertainties. The data that informed the reconstruction, the model with which we inferred it, and the inference procedure all contribute to the systematic uncertainties.

Naturally, the data themselves are a source of systematic uncertainties (spatially stationary reddening law, mismodeling of binaries, etc., see Zhang2023) and additionally are known to be incomplete. Given lower stellar densities in heavily obscured regions, volumes of the map behind dense dust clouds are poorly constrained by the data, which limits the map's fidelity, cf. \Cref{fig:density_of_stars_mollview}. Thus, we believe our dust reconstruction to be an underestimation of the true extinction toward dense dust clouds. Zucker2021 also note this effect when comparing the Leike2020 map with 2D integrated extinction maps based on infrared photometry, finding that the Leike2020 map is not sensitive to regions with $A_V \gtrsim 2$ mag.

Our model includes a number of approximations. First, we assume a GP-prior on the logarithmic dust extinction using the kernel from Leike2020 and additionally only apply it approximately via ICR. Second, we assume $\mathcal{D}_\omega$ to be independent of $\mathcal{D}_A$. Third, we assume the parallax error to be Gaussian, and fourth, we assume the extinction error to be Gaussian.

For extremely low extinctions the assumption of $A$ being Gaussian is poor due to the positivity prior in the ZGR23 catalog. We correct for this bias towards higher estimated extinction in regions with assumed extremely low true extinctions post-hoc by cutting away the innermost 69pc as described in \Cref{sec:posterior_inference} and publish an auxiliary map of integrated extinction out to 69pc from the sun to correct integrated LOS predictions for the removed extinction.

We further release a catalog of the predicted extinctions of our model to all stars that we use for the reconstruction to allow for additional validation work. In \Cref{appx:extinction_catalog}, we perform a non-exhaustive comparison of our predictions versus the ZGR23 ones. We find that both predictions for the extinction to stars disagree below 50 mmag and above 4 mag (34% more stars than expected have larger respectively smaller extinction prediction compared to ZGR23). See \Cref{appx:extinction_catalog} for more details.

Furthermore, our posterior inference is an approximation. We assume our approximation of the true posterior accurately captures the intrinsic model uncertainties (cf. Arras2022, Leike2019, Leike2020, Mertsch2023, Roth2023DirectionDependentCalibration, Hutschenreuter2023, Tsouros2023, Roth2023FastCadenceHighContrastImaging, Hutschenreuter2022). However, we do need to worry about structures getting burned in when we increase the maximum distance probed during the inference from 600 pc to 1800 pc in steps of 300pc as described in \Cref{sec:posterior_inference}. We check the final reconstruction for this effect by comparing it against a larger reconstruction which does not subselect the stars based on their distance during the inference but uses only a small sub-sample of ZGR23 stars with more stringent quality flags. The larger reconstruction, which extends out to 2kpc in distance, is released as an additional data product. We find no significant differences between both runs. See \Cref{appx:2kpc_reconstruction} for details on the larger reconstruction.

7. Results

We reconstruct 12 samples (6 antithetically drawn samples) of the 3D dust extinction distribution, each of which encompasses 607,125,504 differential extinction voxels. The voxels are arranged on $772$ HEALPix spheres with $N_\text{side}=256$ spaced at logarithmically increasing distances. After removing the innermost <69pc and outermost >1250pc HEALPix spheres, we are left with $516$ HEALPix spheres. The samples and the posterior mean for the reconstruction are publicly available at https://doi.org/10.5281/zenodo.8187943. We also provide the posterior mean and standard deviation of the reconstruction interpolated to heliocentric Galactic Cartesian Coordinates (X, Y, Z) and Galactic spherical Coordinates ($l$, $b$, $d$) as well as the scripts for the interpolation. Furthermore, the map can be queried via the dustmaps Python package (Green2018Dustmaps). See \Cref{appx:using_the_reconstruction} for further details on using the reconstruction.

The distance resolution in our reconstruction is highest for close-by voxels and decreases further out. Our highest distance resolution is 0.4pc and our lowest distance resolution is 7pc. Our angular resolution is $14'$ and is independent of the distance.

Footnote: The stated highest and lowest distance resolutions of 0.4pc and 7pc refer to the minimum and maximum extent of the voxels in the radial direction. This is not necessarily the same as the minimum separation in distance at which we are able to resolve structures with the given data and our model. In practice the resolution is highly position dependent and is predominantly driven by the quantity and quality of the data.

The reconstruction is in terms of the unitless ZGR23 extinction as defined in Zhang2023. For visualization purposes we translate the ZGR23 extinction to Johnson's V-band at $\lambda=540.0\,\hbox{nm}$, i.e., $A_V := A(540.0\,\hbox{nm})$. To perform the conversion, we adopt the extinction curve published in ZGR23, and multiply the unitless ZGR23 extinction by a factor of $2.8$. We refer readers to the full extinction curve at https://doi.org/10.5281/zenodo.6674521 from Zhang2023 for the coefficients needed to translate the extinction to other bands.
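Written out, with $A_\mathrm{ZGR23}$ denoting the unitless ZGR23 extinction, the conversion used for all visualizations is simply:

$$ A_V = A(540.0\,\hbox{nm}) \approx 2.8 \cdot A_\mathrm{ZGR23}\,. $$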

Photo

Caption: Mollweide projection of the POS integrated $A_V$ extinction out to 250pc, 500pc, 750pc, and up to the maximum distance of our map. The colorbar saturates at the $99.9\%$ quantile.

Photo

Caption: Same as \Cref{fig:integrated_mollview} but showing the difference between the integrated extinctions of consecutive distance slices, projected on the POS. The colorbar saturates at the $99.9\%$ quantile.

\Cref{fig:integrated_mollview} depicts the POS projection of the posterior mean reconstruction integrated out to 250pc, 500pc, 750pc, and up to the end of our sphere. The $A_V$ values are in units of magnitudes. We see that higher-latitude features like the Aquila Rift are comparatively close-by while structures in the Galactic plane appear only gradually. \Cref{fig:diff_mollview} shows the difference between the integrated POS projections. We recover well known features of integrated dust but are now able to de-project them.

Photo

Caption: Heliocentric Galactic Cartesian (X, Y, Z) projections of the posterior mean of our 3D dust map in a box with dimensions $2.5\,\hbox{kpc} \times 2.5\,\hbox{kpc} \times 0.8\,\hbox{kpc}$ centered on the Sun. The colorbar is linear and saturates at the $99.9\%$ quantile. A GIF of the posterior samples is shown at https://faun.rc.fas.harvard.edu/gedenhofer/perm/E+23/21b9_final.gif. A low-resolution 3D interactive version of this figure is available at https://faun.rc.fas.harvard.edu/czucker/Paper_Figures/3D_Dust_Edenhofer2023.html.

Photo

Caption: Same as \Cref{fig:galactic} but with a catalog of clusters of young stellar objects (Kuhn2023YSO), based on Kuhn2021, Winston2020, Marton2022, shown as blue dots on top of the reconstruction, with their distance uncertainties shown as extended lines.

Photo

Caption: Heliocentric Galactic Cartesian (X, Y, Z) projections of the standard deviation of the reconstructed dust extinction integrated within the box $2.5\,\hbox{kpc} \times 2.5\,\hbox{kpc} \times 0.8\,\hbox{kpc}$ centered on the Sun. The colorbar is linear and saturates at the $99.9\%$ quantile.

\Cref{fig:galactic} shows a bird's eye (X, Y), side-on (X, Z), and (Y, Z) projection of the posterior mean of our reconstruction in heliocentric Galactic Cartesian coordinates. The image depicts the innermost $2.5\,\hbox{kpc} \times 2.5\,\hbox{kpc} \times 0.8\,\hbox{kpc}$ around the Sun in $A_V$ extinction, integrated over $z$ from -400 pc to 400 pc, $y$ from -1.25 kpc to 1.25 kpc, and $x$ from -1.25 kpc to 1.25 kpc, respectively. In \Cref{fig:galactic_ysos} we overlay a catalog of clusters of Young Stellar Objects (YSOs) (Kuhn2023YSO) based on Kuhn2021, Winston2020, Marton2022, which are shown as blue dots. The positions of the YSO clusters visually agree with the positions of dust clouds within the YSO clusters' reported distance uncertainties. The standard deviation of the reconstruction is shown in \Cref{fig:galactic_std}. It is on the order of $10\%$ of the posterior mean and tends to increase with distance.

The reconstruction has a high dynamic range and reveals faint dust lanes in the reconstructed volume. Small approximately spherical cavities are evident throughout the map. The dust clouds in the reconstruction are compact and only weakly elongated radially. Prominent large-scale features, such as the Radcliffe Wave (Alves2020) and the Split (Lallement2019), have been resolved at an unprecedented level of detail, previously only accessible for the most nearby dust clouds.

7.1 Comparison to Existing 3D Dust Maps

In this section, we compare our map to other 3D dust maps in the literature. We denote the dust map of Leike2020 by LGE20, that of Vergely2022 by VLC22, that of Green2019 by Bayestar19, and that of Leike2022 by L+22. For the purposes of comparison, we show the posterior mean. We release the statistical uncertainties as additional data products, and we strongly advise taking into account these statistical uncertainties for any quantitative analysis. However, the differences between the various 3D dust reconstructions discussed here are systematic differences and are not captured by the reconstructed statistical uncertainties.

Photo

Caption: Side-by-side views of the 3D dust maps from Bayestar19, VLC22, L+22, and this work, shown in heliocentric Galactic Cartesian (X, Y, Z) projections. The colorbars are saturated at the $99.9\%$ quantile of the respective reconstruction.

In \Cref{fig:sidebyside}, we show 3D (X, Y, Z) projections of the maps, comparing Bayestar19, VLC22, L+22, and this work side-by-side. All four maps agree on the general structure of the distribution of dust.

This work, L+22, and VLC22 have a comparable distance resolution while Bayestar19 only features comparatively few distance bins and more strongly pronounced fingers-of-god. Compared to L+22, we feature more homogeneously extended dust clouds and significantly fewer radial wiggles in the distances to dust clouds. Compared to VLC22, we feature more compact dust clouds, less grainy structures, and a higher dynamic range. Both this work and VLC22 feature dust clouds in a comparable volume around the Sun despite the VLC22 map technically extending out further in Galactic heliocentric X and Y.

Photo

Caption: Zoomed-in version of \Cref{fig:sidebyside} for the volume reconstructed in Leike2020, now also showing the LGE20 reconstruction for comparison. We leave out L+22 from the comparison because the authors explicitly focus on larger volumes and trade strongly pronounced artifacts in the inner couple hundred parsecs for a larger probed volume. The colorbars are again saturated at the $99.9\%$ quantile of the respective reconstruction.

\Cref{fig:sidebyside_leike2020box} shows the same projections for the volume reconstructed in the LGE20 map and includes the LGE20 map for comparison. The zoom-in highlights the close similarity between this work and the LGE20 map. All larger structures have direct correspondences in the other map, yet the distances to the structures are slightly different. Furthermore, the LGE20 map appears slightly sharper. The model in LGE20 is very similar to ours but uses fewer approximations. LGE20 also uses compiled data (StarHorse DR2, see Anders2019). More work is needed to assess the validity of the sharper features in LGE20 not present in this work. The VLC22 map is in good agreement as well but is lower in resolution. Bayestar19 poorly resolves distances at the scale of the LGE20 map.

Photo

Caption: Mollweide projections of total integrated extinction and of 3D extinction maps integrated out to the maximum distance of the respective map. Bayestar19 reconstructs up to a maximum distance of 63kpc (maximum reliable distance 10kpc) and is integrated out to that volume. L+22 reconstructs up to a maximum distance of 16kpc but the authors trust their map only out to 4kpc, and we integrate their map only to 4kpc. VLC22 reconstructs a heliocentric box of size $3\,\hbox{kpc} \times 3\,\hbox{kpc} \times 800\,\hbox{pc}$ with $10^3\,\hbox{pc}^3$ voxels, covering at most $2.16\,\hbox{kpc}$ in distance, and is integrated out to the end of the box. Likewise, LGE20 reconstructs a heliocentric box of size $|X| < 370\,\hbox{pc}$, $|Y| < 370\,\hbox{pc}$, $|Z| < 270\,\hbox{pc}$, covering at most $590\,\hbox{pc}$ in distance, and is integrated out to the end of the box. The colorbars saturate at the respective $99\%$ quantile of the map except for the colorbar of Planck 2013, which saturates at 5 mag for better comparability.

In \Cref{fig:mollview_versus} we compare the POS view of Bayestar19, L+22, VLC22, LGE20 and this work. The respective POS views are integrated out to the maximum distance probed by each map --- $<63\,\hbox{kpc}$ in distance for Bayestar19 (maximum reliable distance 10kpc), $<4\,\hbox{kpc}$ in distance for L+22 (the authors trust structures up to 4kpc though the map extends to 16kpc), a heliocentric box of size $3\,\hbox{kpc} \times 3\,\hbox{kpc} \times 800\,\hbox{pc}$ with $10^3\,\hbox{pc}^3$ voxels for VLC22 with at most $2.16\,\hbox{kpc}$ in distance, and a heliocentric box of size $|X|, |Y| < 370\,\hbox{pc}$, $|Z| < 270\,\hbox{pc}$ and up to $590\,\hbox{pc}$ in distance for LGE20. In addition, we show the Planck 2013 extragalactic dust map (Planck2013) and the Gaia Total Galactic Extinction (TGE) 2022 map (Delchambre2022).

All maps agree on fine structures at high galactic latitudes but differ in the Galactic plane due to the difference in distance up to which the respective reconstruction extends. 3D dust reconstructions do not probe deep enough into the Galactic plane to fully recover the Planck 2013 extragalactic dust map. Bayestar19 and L+22 probe much deeper than VLC22, LGE20 and this work, yet they do not probe the full column of dust seen in Planck 2013 and Gaia TGE 2022. Both VLC22 and our map probe up to a similar depth while LGE20 only probes dust at much closer distances.

Photo

Caption: Zoomed-in views toward individual molecular clouds (Perseus, Orion, Taurus, Corona Australis, and Chamaeleon) seen in \Cref{fig:mollview_versus}. The colorbars are logarithmic and span the full dynamic range of the selected POS slice in every image. Each row is a separate region and each column a separate reconstruction.

\Cref{fig:regions_of_interest} shows a zoom-in comparison of the Perseus, Orion A, Taurus, Corona Australis (CrA), and Chamaeleon molecular clouds, integrated out to the maximum distance of each map (4kpc for L+22). Among the 3D dust reconstructions, Bayestar19 and L+22 have arguably the highest angular resolution with $3.4'$ ($N_\text{side}=1024$) and $1.9'$ respectively. They resolve the high-latitude dust clouds in great detail, although L+22 suffers from localized artifacts in patches of the sky. Both LGE20 ($1\,\hbox{pc}^3$ boxes) and this work ($N_\text{side}=256$) achieve a comparable angular resolution. The VLC22 reconstruction ($10^3\,\hbox{pc}^3$ voxels) is noticeably lower in resolution and does not resolve the cloud substructure on the POS.

8. Conclusions

We present a 3D dust map with a POS and LOS resolution comparable to Leike2020 that extends out to 1.25kpc. We use the distance and extinction estimates of Zhang2023, which have much lower extinction uncertainties than competing catalogs while probing a similar number of stars. Our reconstruction has an angular resolution of $14'$ and a distance resolution of up to 0.4pc. We show the map to be in good agreement with existing 3D dust maps and to improve upon them in terms of covered volume and spatial resolution. The map is made publicly available at https://doi.org/10.5281/zenodo.8187943 and can also be queried via the dustmaps Python package. We anticipate that the map will be useful for a wide range of applications in studying the distribution of dust and the ISM more broadly.

We thank Joao Alves for many fruitful discussions at the "Self-Organization Across Scales: From nm to parsec (SOcraSCALES)" workshop at the Munich Institute for Astro-, Particle and BioPhysics, an institute of the Excellence Cluster ORIGINS, in 2022 and afterwards. Furthermore, we thank Jakob Roth for many invaluable discussions about the model and for providing feedback on the early versions of the reconstruction. We thank Alyssa Goodman for providing invaluable feedback on the late versions of the reconstructions. We also thank Michael A. Kuhn for providing us with a unified catalog of Young Stellar Objects. Gordian Edenhofer acknowledges the support of the German Academic Scholarship Foundation in the form of a PhD scholarship ("Promotionsstipendium der Studienstiftung des Deutschen Volkes"). Catherine Zucker acknowledges that support for this work was provided by NASA through the NASA Hubble Fellowship grant #HST-HF2-51498.001 awarded by the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. Andrew K. Saydjari acknowledges support by a National Science Foundation Graduate Research Fellowship (DGE-1745303). Andrew K. Saydjari and Douglas Finkbeiner acknowledge support by NASA ADAP grant 80NSSC21K0634 "Knitting Together the Milky Way: An Integrated Model of the Galaxy's Stars, Gas, and Dust". This work was supported by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions). A portion of this work was enabled by the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University. We acknowledge support by the Max-Planck Computing and Data Facility (MPCDF). This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.

9. ZGR23 in Dust-free Regions

To gauge the reliability of the ZGR23 catalog, we analyze the extinction to stars in dust-free regions, cf. Leike2019, Leike2020. In dust-free regions, we would expect the extinction to be zero within the uncertainties of the catalog. To classify a region as dust-free, we use the Planck dust emission map (Planck2013). A region is said to be dust-free if the Planck $E(B-V)$ map is at or below $0.0095\,\mathrm{mag}$, or approximately $0.029$ in terms of $A_V$.
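A minimal sketch of this selection, assuming the Planck $E(B-V)$ map is available as a HEALPix array in RING ordering and the star positions as galactic coordinates in degrees; all variable names are hypothetical.

import numpy as np
import healpy as hp

EBV_CUT = 0.0095  # mag; threshold on the Planck E(B-V) emission map
# With A_V ~ 3.1 * E(B-V), the cut corresponds to roughly 0.029 mag in A_V.

def dust_free_mask(planck_ebv, l_deg, b_deg):
    """Boolean mask of stars whose POS falls on a dust-free Planck pixel."""
    nside = hp.npix2nside(planck_ebv.size)
    # healpy expects colatitude theta = 90 deg - b and longitude phi = l.
    ipix = hp.ang2pix(nside, np.radians(90.0 - b_deg), np.radians(l_deg))
    return planck_ebv[ipix] <= EBV_CUT

# Usage: mask = dust_free_mask(planck_ebv, stars["l"], stars["b"]), then
# inspect the catalog extinctions stars["A"][mask] as in the histograms below.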

ZGR23 extinction in dust-free regions.

Photo

ZGR23 standardized extinction in dust-free regions.

Photo

Caption: The top plot shows the histogram of the ZGR23 extinctions in dust-free regions translated to $A_V$. The mean extinction in dust-free regions based on Planck is shown as a vertical green line, and the cutoff value of our dust-free definition translated to $A_V$ is shown in red. The bottom plot shows the same figure as above, but with the extinctions scaled by their accompanying uncertainties. A truncated standard normal distribution and a standard normal distribution are plotted on top. Both ordinates are logarithmic.

The first panel of \Cref{fig:zgr23_offset} shows the histogram of the ZGR23 extinction to stars with quality_flags<8 in dust-free regions, translated to $A_V$. We see that the histogram of the extinction peaks at the cutoff value and coincides with the mean total extinction as measured by Planck in those regions, translated to $A_V$. The density of extinction values falls off exponentially beyond the cutoff value. Overall, the ZGR23 extinction seems to be in good agreement with Planck2013 for dust-free regions.

The second panel of \Cref{fig:zgr23_offset} shows the extinctions divided by their uncertainties for the stars from \Cref{fig:zgr23_offset}. The standardized extinctions are centered around unity, indicating that the ZGR23 extinctions are indeed offset from zero by about one standard deviation in dust-free regions. This agrees with the previous finding that the extinctions are centered around the cutoff value instead of clustering around zero. The width around the center is comparable to that of a truncated standard normal distribution or a standard normal distribution. In total, about $1\%$ of the probability mass lies outside the possible range of all ZGR23 extinctions with quality_flags<8 if we assume a normal distribution for the extinctions.

Except for outliers far away from the center, which can be captured by an outlier model, the ZGR23 catalog seems to be in agreement with POS measurements in dust-free regions, and the spread around the cutoff value approximately follows a (truncated) normal distribution. We deem the ZGR23 catalog to be reliable for our purposes and approximate the uncertainties using a normal distribution. We accept the mismodeling of a small fraction of probability mass for a simpler model, see \Cref{sec:caveats,sec:posterior_inference}.
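The standardized-extinction histogram and the out-of-range probability mass can be reproduced along the following lines. The arrays below are synthetic stand-ins for the ZGR23 extinctions and uncertainties in dust-free regions, and the one-sigma offset is assumed purely for illustration, so the printed value will not match the quoted $1\%$.

import numpy as np
from scipy import stats

# Synthetic stand-ins for ZGR23 extinctions A (mag) and uncertainties
# sigma (mag) of stars with quality_flags<8 in dust-free regions.
rng = np.random.default_rng(42)
sigma = rng.uniform(0.01, 0.05, size=100_000)
A = rng.normal(loc=sigma, scale=sigma)  # centered ~1 sigma above zero

# Standardized extinctions; these scatter around unity if the extinctions
# are offset from zero by about one standard deviation.
z = A / sigma

# Probability mass a normal model places at unphysical, negative extinctions.
outside = stats.norm.cdf(0.0, loc=A, scale=sigma).mean()
print(f"mass outside the admissible range: {outside:.1%}")

# Reference curves to overlay on the histogram of z: a unit-width normal
# centered on unity, and the same normal truncated at zero extinction.
grid = np.linspace(-5, 5, 201)
pdf_normal = stats.norm.pdf(grid, loc=1.0)
pdf_truncated = stats.truncnorm(a=-1.0, b=np.inf, loc=1.0).pdf(grid)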

10. Metric Gaussian Variational Inference

The variational inference method Metric Gaussian Variational Inference (MGVI) approximates the true posterior $P(\xi\,|\,d)$ with a standard normal distribution in a linearly transformed space in which the posterior more closely resembles a standard normal. Let $Q_{\bar{\xi}}(y\,|\,d)=\mathcal{G}(y(\xi)\,|\,y(\bar{\xi}),\mathbb{1})\left|\frac{\mathrm{d}y}{\mathrm{d}\xi}\right|$ be the approximate posterior and $y(\xi): \xi \mapsto y(\xi)$ the coordinate transformation. In this space the transformed posterior reads $P(\xi(y)\,|\,d)\left|\frac{\mathrm{d}\xi}{\mathrm{d}y}\right|$. We denote the metric of the space in which $P(\xi(y)\,|\,d)\left|\frac{\mathrm{d}\xi}{\mathrm{d}y}\right|$ is "more" standard normal by $M := \frac{\mathrm{d}y}{\mathrm{d}\xi} \left(\frac{\mathrm{d}y}{\mathrm{d}\xi}\right)^\dagger$. Assuming $y(\xi): \xi \mapsto y(\xi)$ is known, the difficulty lies solely in finding the optimal $\bar{\xi}$ for $Q_{\bar{\xi}}$.

Based on the Fisher information metric and frequentist statistics, Knollmueller2019 derive a coordinate transformation $y_{\bar{\xi}}(\xi)$ centered on $\bar{\xi}$ that is linear in $\xi$. In Frank2021, the authors find that a set of Riemannian normal coordinates $y_{\bar{\xi}}(\xi)$ centered on $\bar{\xi}$ is an improved, non-linear estimate of the coordinate transform $y(\xi) \approx y_{\bar{\xi}}(\xi)$, termed geometric variational inference (geoVI). The improvement, though, comes at slightly higher computational cost. We refer the reader to Frank2021, Frank2022 for further details on geoVI and its relation to MGVI, the choice of metric, and an analysis of its failure modes. For computational reasons, we use MGVI for our inference.

We start at a random initial position for $\bar{\xi}$ and draw $n_\text{samples}$ standard normal samples in the space of $y$. Next, we transform the samples to the space of $\xi$ via $y_{\bar{\xi}}$, our local, linear approximation to $y(\xi)$ at $\bar{\xi}$. We denote the samples in our parameter space by $\{\xi_1, \dots, \xi_{n_\text{samples}}\}$. Relative to the expansion point $\bar{\xi}$ the samples read $\{\Delta\xi_1:=\xi_1-\bar{\xi}, \dots, \Delta\xi_{n_\text{samples}}:=\xi_{n_\text{samples}}-\bar{\xi}\}$. The samples $\{\Delta\xi_1,\dots,\Delta\xi_{n_\text{samples}}\}$ around $\bar{\xi}$ provide an empirical, sampled approximation to $Q_{\bar{\xi}}$, which we denote by $\tilde{Q}_{\bar{\xi}}$. We optimize $\bar{\xi}$ of our sampled distribution $\tilde{Q}_{\bar{\xi}}$ by minimizing the variational Kullback–Leibler (KL) divergence between $\tilde{Q}_{\bar{\xi}}$ and the true distribution $P$

$$ \begin{align} \bar{\xi}' &= \mathop{\hbox{arg min}}_{\bar{\xi}} \mathrm{KL}\left(\tilde{Q}_{\bar{\xi}}\,,\, P(\xi\,|\,d)\right) \\ &= \mathop{\hbox{arg min}}_{\bar{\xi}} {\left\langle \ln{\frac{\tilde{Q}_{\bar{\xi}}}{P(\xi\,|\,d)}} \right\rangle}_{\tilde{Q}_{\bar{\xi}}}\label{eq:evi_true_kl} \\ &= \mathop{\hbox{arg min}}_{\bar{\xi}} {\left\langle -\ln{P(\xi\,|\,d)} \right\rangle}_{\tilde{Q}_{\bar{\xi}}}\label{eq:evi_kl_wo_approx_dist_vol} \\ &= \mathop{\hbox{arg min}}_{\bar{\xi}} \frac{-1}{n_\text{samples}} \sum_{i=1}^{n_\text{samples}} \ln{P(\bar{\xi} + \Delta\xi_i\,|\,d)}\label{eq:evi_kl_sampled} \ . \end{align} $$

Note, we keep the relative samples $\{\Delta\xi_1:=\xi_1-\bar{\xi}, \dots, \Delta\xi_{n_\text{samples}}:=\xi_{n_\text{samples}}-\bar{\xi}\}$ fixed during the optimization and only vary $\bar{\xi}$; the entropy of $\tilde{Q}_{\bar{\xi}}$ is then independent of $\bar{\xi}$, which is why the term for the approximate distribution drops out of the KL divergence above. Finally, we update the expansion point $\bar{\xi}$ to the newly found optimum $\bar{\xi}'$.

After the minimization we draw a new set of samples, transform them via a local, linear expansion of $y$, and then minimize again. We repeat the drawing of samples and minimization until we reach a fixed point for $\bar{\xi}$. \Cref{alg:expansion_point_vi} summarizes the algorithmic steps of our variational approximation to the true posterior.

Caption: Pseudocode for our expansion-point variational inference scheme using MGVI.
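As a rough illustration of this loop, here is a self-contained sketch. It deliberately replaces MGVI's metric sampling with unit-covariance draws and uses a toy posterior, so it shows only the expansion-point structure of the scheme, not the actual metric or model.

import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(xi):
    """Toy stand-in for -ln P(xi | d) of the full dust model."""
    return 0.5 * np.sum(xi**2) + np.sum(np.log1p(np.exp(xi)))

def draw_residuals(xi_bar, n_samples, rng):
    """Stand-in for MGVI's metric sampling. The real scheme draws residuals
    with covariance given by the inverse Fisher metric at xi_bar;
    unit-covariance draws are used here purely for illustration."""
    return rng.standard_normal((n_samples, xi_bar.size))

rng = np.random.default_rng(0)
xi_bar = rng.standard_normal(10)  # random initial expansion point

for _ in range(5):  # iterate until xi_bar reaches a fixed point
    # Draw new residual samples around the current expansion point ...
    residuals = draw_residuals(xi_bar, n_samples=4, rng=rng)
    # ... keep them fixed, and minimize the sampled KL over xi_bar only.
    kl = lambda xb: np.mean([neg_log_posterior(xb + dxi) for dxi in residuals])
    xi_bar = minimize(kl, xi_bar, method="L-BFGS-B").x

print(xi_bar)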

11. Extinction Catalog

We release a catalog of expected extinction for all stars within the subset of the ZGR23 catalog that we use for our reconstruction (see \Cref{sec:data}). We predict the expected extinction conditional on the known parallax including parallax uncertainties. Our prediction is the best guess of our model for the extinction towards a star but is not necessarily the best guess for the extinction at the mean parallax of the star.

Our extinction predictions (see \Cref{sec:priors,sec:likelihood:response}) differ from the extinctions in the ZGR23 catalog by coupling the individual stars via the 3D dust extinction density. By virtue of every star depending on all nearby stars via the prior, our extinction predictions come in the form of joint predictions for all stars. In regions where the 3D dust extinction density is well constrained, the joint predictions to first order factorize into predictions for individual stars, and we can compute expected extinctions for individual stars and their uncertainties.

Our catalog of extinction includes the innermost 69 pc from the beginning of our grid and the outer 550 pc beyond 1.25 kpc that we cut away in the 3D map. We advise caution when analyzing the stars of our catalog within those regions as they might carry additional biases. See \Cref{sec:caveats,sec:posterior_inference} for details on why these regions were removed from the final map.

Our extinction prediction versus the ZGR23 extinction.

Photo

Our inferred uncertainties versus the ZGR23 extinction uncertainties.

Photo

Caption (fig:zgr23_versus_ours): The top panel shows our mean posterior extinctions versus the ZGR23 extinctions to stars as a 2D histogram. The $16^\text{th}$, $50^\text{th}$, and $84^\text{th}$ quantiles of the ZGR23 extinctions for each bin of our mean extinction are shown as blue lines. The respective quantiles of our predictions in bins of the ZGR23 extinctions are shown as orange lines. The bottom panel shows the same comparison but for our posterior mean predictions of the ZGR23 measurement uncertainties $\sqrt{\left[n_\sigma(\xi) \cdot \sigma_A\right]^2+\sigma_a^2}$ versus the ZGR23 uncertainties. Note, the predictions for the ZGR23 measurement uncertainties are not the uncertainties of our extinction predictions. See \Cref{sec:likelihood} and specifically \Cref{eq:total_likelihood} for further details on the quantities shown here. The bisectors are shown in red. The colorbars are logarithmic.

The top panel of \Cref{fig:zgr23_versus_ours} compares the ZGR23 extinctions to our mean extinction predictions to stars. Overall, our mean extinctions are in very good agreement with the extinctions in the ZGR23 catalog for the vast majority of stars. However, below 50 mmag and above 4 mag our extinction predictions deviate from the predictions in ZGR23. At any given ZGR23 (respectively our) extinction bin, we would expect half of our (respectively the ZGR23) extinctions to be below and the other half to be above the bisector. At 50 mmag, 34% more stars than expected have higher extinctions than the corresponding extinctions in ZGR23. The difference further widens for lower ZGR23 extinctions. At 4 mag, 34% more stars than expected have lower extinctions than the corresponding extinctions in ZGR23.

The bottom panel of \Cref{fig:zgr23_versus_ours} shows our and the ZGR23 extinction uncertainties. Note, our extinction uncertainties here are predictions of the measured uncertainties of the ZGR23 catalog $\sqrt{\left[n_\sigma(\xi) \cdot \sigma_A\right]^2+\sigma_a^2(\rho(\xi))}$ and not the uncertainties of our extinction predictions $\mathrm{std}(\bar{a})$, see \Cref{sec:likelihood}. Overall, both uncertainties agree well for the vast majority of stars. At low extinction uncertainties, our uncertainties only marginally inflate the ZGR23 uncertainties. However, at high extinction uncertainties, our predictions cover a larger range, and we find the ZGR23 uncertainties to be significantly lower than our predictions for these stars.

Photo

Caption: The mean standardized extinctions $(A - \bar{a}) / \sqrt{(n_\sigma \cdot \sigma_A)^2 + \sigma^2_a}$ (see \Cref{sec:likelihood} and specifically \Cref{eq:total_likelihood}) within the range of $-5$ to $5$.

\Cref{fig:standardized_extinction_prediction} summarizes the extinctions and the extinction uncertainties of both ZGR23 and our predictions into a single histogram of the mean standardized extinction. The mean standardized residuals follow a standard Gaussian (cf. \Cref{sec:likelihood}). However, the mean standardized residuals show two slight overdensities, one at each tail of the Gaussian, indicating that some outliers are not yet fully captured by our inference of the uncertainties.
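For concreteness, the mean standardized extinctions of the figure follow directly from the quantities entering the likelihood. A sketch with hypothetical array names (A, sigma_A from ZGR23; a_bar, n_sigma, sigma_a as per-sample model predictions):

import numpy as np

def mean_standardized_extinction(A, sigma_A, a_bar, n_sigma, sigma_a):
    """Per-star mean of (A - a_bar) / sqrt((n_sigma * sigma_A)^2 + sigma_a^2).

    A, sigma_A:              ZGR23 extinctions and uncertainties, shape (n_stars,)
    a_bar, n_sigma, sigma_a: model predictions per posterior sample,
                             shape (n_samples, n_stars)
    """
    z = (A - a_bar) / np.sqrt((n_sigma * sigma_A) ** 2 + sigma_a**2)
    return z.mean(axis=0)  # average over posterior samples

# A histogram of the returned values should resemble a standard normal, with
# tail overdensities flagging outliers not captured by the model.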

Photo

Caption (fig:actual_zgr23_versus_ours_unc): Similar to \Cref{fig:zgr23_versus_ours} but for the posterior standard deviation of our extinctions versus the ZGR23 uncertainties. The $16^\text{th}$, $50^\text{th}$, and $84^\text{th}$ quantiles of the ZGR23 uncertainties for each bin of our standard deviation are shown as blue lines. The respective quantiles of our standard deviation in bins of the ZGR23 uncertainties are shown as orange lines. The bisectors are shown in red. The colorbars are logarithmic.

\Cref{fig:actual_zgr23_versus_ours_unc} shows the posterior standard deviation of our extinction predictions versus the ZGR23 uncertainties. Our model yields approximately one order of magnitude lower extinction uncertainties than the ZGR23 uncertainties for the vast majority of stars. The effect is less pronounced for low ZGR23 extinction uncertainties.

Our predictions for the extinction to stars theoretically contain more information since we allow for cross-talk between nearby stars via the 3D distribution of dust and thus might be more accurate. However, the ZGR23 catalog might yield better results in practice because it does not discretize the 3D volume within which the stars reside. By discretizing the modeled volume, we can produce contradictory data that would be non-contradictory in a continuous space, e.g. by putting highly extincted stars that lie in a dust cloud into the same voxel as less extincted stars that are adjacent to the dust cloud. Overall, both predictions agree very well for stars between 50 mmag and 4 mag. More work is needed to validate the discrepant predictions at very low and very high extinctions.

12. 2kpc Reconstruction

In \Cref{sec:posterior_inference} we describe how we iteratively increase the distance out to our maximum reconstructed distance. We do so to improve the convergence of the reconstruction. We also try naively reconstructing the full volume at once. Using all the available data is computationally prohibitive, so we limit the reconstruction to high-quality data using quality_flags==0, $\sigma_A \leq 0.04$, and ${\sigma_\omega}/{\omega} < 0.33$.

We use ${1}/{(\omega-\sigma_\omega)}<3\,\hbox{kpc}$ and ${1}/{(\omega+\sigma_\omega)}>40\,\hbox{pc}$ to select the stars within a 3 kpc sphere. To further speed up the inference, we start with only a sample of $10\%$ of the stars, then $20\%$, $45\%$, $67\%$, and finally $100\%$ of the stars. In total, we select 59,334,214 stars. After the inference, we cut away the outermost 1 kpc of the sphere of the data-constrained region to avoid degradation effects due to the thinning out of stars at the edge. The overall reconstructed volume after removing the outermost HEALPix spheres extends out to 2 kpc in distance.
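In code, these cuts amount to a simple mask over the catalog columns. A sketch, with hypothetical column names standing in for the actual ZGR23 fields and parallaxes assumed to be in units of 1/pc:

import numpy as np

def select_high_quality(cat):
    """Apply the cuts for the naive 3 kpc reconstruction.
    `cat` is a dict-like catalog with hypothetical column names:
    quality_flags, sigma_A (mag), parallax and parallax_err (1/pc)."""
    omega, sigma_omega = cat["parallax"], cat["parallax_err"]
    mask = (
        (cat["quality_flags"] == 0)
        & (cat["sigma_A"] <= 0.04)
        & (sigma_omega / omega < 0.33)
        & (1.0 / (omega - sigma_omega) < 3000.0)  # within 3 kpc
        & (1.0 / (omega + sigma_omega) > 40.0)    # beyond 40 pc
    )
    return mask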

Photo

Caption: Axis-parallel projections of the reconstructed dust extinction in a box of dimensions $4\,\hbox{kpc} \times 4\,\hbox{kpc} \times 0.8\,\hbox{kpc}$ centered on the Sun. The colorbar is linear and saturates at the $99.9\%$ quantile.

Photo

Caption: Same as \Cref{fig:galactic_2kpc} but with a catalog of clusters of young stellar objects (Kuhn2023YSO) based on Kuhn2021, Winston2020, Marton2022 shown as blue dots on top of the reconstruction, with their distance uncertainties shown as extended lines.

The reconstruction is shown in \Cref{fig:galactic_2kpc} and again in \Cref{fig:galactic_2kpc_with_ysos} with a catalog of YSO clusters (Kuhn2023YSO) overlaid. It shows the same large-scale features as the smaller reconstruction discussed in the main text. The distribution of dense dust clouds is in agreement with the positions of YSO clusters within the distance uncertainties of the YSO clusters. Compared to \Cref{fig:galactic}, the reconstruction is less detailed and features more pronounced artifacts.

We use the larger reconstruction to validate the inference of the smaller one. Specifically, we use the larger reconstruction to ensure that structures aligned with or close to the radial boundaries at which we increase the distance of the main reconstruction are independent of the locations at which we increase the distance covered.

We release the larger reconstruction as an additional data product together with the main reconstruction. We advise using the main reconstruction for all regions that fall within its volume. Care should be taken when interpreting small-scale features or structures at large distances in the larger reconstruction.

13. Using the Reconstruction

All data products are made publicly available at https://doi.org/10.5281/zenodo.8187943. The data products are stored in the FITS file format. The main data products are the posterior samples of the spatial 3D distribution of dust extinction discretized to HEALPix spheres at logarithmically spaced distances. For convenience, we also provide the posterior mean and standard deviation of the samples of the HEALPix spheres at logarithmically spaced distances.
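A minimal sketch for loading the samples and forming the summary products, assuming the samples FITS file stores an array of shape (samples, distance bins, HEALPix pixels) in its first extension; the file name and layout are assumptions to be checked against the data release itself.

import numpy as np
from astropy.io import fits

# Load the posterior samples of the 3D dust extinction; the assumed layout is
# (n_samples, n_distance_bins, n_healpix_pixels) -- check the release README.
with fits.open("samples_healpix.fits") as hdul:
    samples = np.asarray(hdul[1].data)

# Posterior summaries over the sample axis, matching the convenience products.
post_mean = samples.mean(axis=0)
post_std = samples.std(axis=0)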

We additionally interpolate the posterior mean and standard deviation to a Cartesian grid. The interpolation is carried out at a lower resolution using $2^3\,\hbox{pc}^3$ voxels to keep the filesize reasonably small. We recommend re-interpolating the map at a higher resolution for the study of individual regions within the map.

We release the interpolation script as part of the data release. Its signature reads interp2box.py [-h] [-o OUTPUT_DIRECTORY] [-b BOX] healpix_path. A box is a string of two tuples separated by two colons. The first tuple specifies the number of voxels along each axis of the box and the second tuple specifies the corners of the box in parsecs in heliocentric coordinates. To interpolate the map to a box with $1051 \times 1051 \times 351$ voxels of size $\rm |X|, |Y| \leq 2100\,\hbox{pc}$ and $\rm |Z| \leq 700\,\hbox{pc}$, use interp2box.py -b '(1051,1051,351)::((-2100,2100),(-2100,2100),(-700,700))' -- mean_and_std_healpix.fits.

In addition, we interpolate the posterior mean and standard deviation to galactic longitude, latitude and distance. The signature of the interpolation script reads interp2lbd.py [-h] [-o OUTPUT_DIRECTORY] [-b BOX] healpix_path. Its behavior is similar to interp2box.py but the box is specified in terms of galactic longitude, latitude and distance in units of degrees, degrees, and parsecs respectively.

Both scripts require the Python packages numpy (Harris2020), astropy (Astropy2013, Astropy2018, Astropy2022), and healpy (Gorski2005, Zonca2019). Depending on the number of output voxels, the interpolation can be very memory intensive and computationally expensive.
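At its core, such an interpolation maps every query point to a HEALPix pixel and a distance bin. A stripped-down, nearest-neighbor sketch under the array layout assumed above; the release scripts interpolate more carefully, and the helper below is hypothetical:

import numpy as np
import healpy as hp

def query_box(post_mean, r_edges, xyz):
    """Nearest-bin lookup of the posterior mean at heliocentric points.
    post_mean: (n_distance_bins, n_pixels) HEALPix maps at log-spaced distances,
    r_edges: distance bin edges in pc, xyz: (N, 3) points in pc.
    (This is nearest-neighbor only; no smoothing between bins or pixels.)"""
    nside = hp.npix2nside(post_mean.shape[1])
    r = np.linalg.norm(xyz, axis=1)
    ipix = hp.vec2pix(nside, xyz[:, 0], xyz[:, 1], xyz[:, 2])
    ibin = np.clip(np.searchsorted(r_edges, r) - 1, 0, post_mean.shape[0] - 1)
    return post_mean[ibin, ipix]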

Literature

  1. [Alves2020] Alves, João and Zucker, Catherine and Goodman, Alyssa A. and Speagle, Joshua S. and Meingast, Stefan and Robitaille, Thomas and Finkbeiner, Douglas P. and Schlafly, Edward F. and Green, Gregory M.: A Galactic-scale gas wave in the solar neighbourhood, Feb-2020, Nature, Vol. 578, Nr. 7794, pp. 237-239, https://doi.org/10.1038/s41586-019-1874-z
  2. [Anders2019] Anders, F. and Khalatyan, A. and Chiappini, C. and Queiroz, A.B. and Santiago, B.X. and Jordi, C. and Girardi, L. and Brown, A.G.A. and Matijevič, G. and Monari, G. and Cantat-Gaudin, T. and Weiler, M. and Khan, S. and Miglio, A. and Carrillo, I. and Romero-Gómez, M. and Minchev, I. and de Jong, R.S. and Antoja, T. and Ramos, P. and Steinmetz, M. and Enke, H.: Photo-astrometric distances, extinctions, and astrophysical parameters for Gaia DR2 stars brighter than G = 18, Aug-2019, Astronomy & Astrophysics, Vol. 628, pp. A94, https://doi.org/10.1051/0004-6361/201935765
  3. [Anders2022] Anders, F. and Khalatyan, A. and Queiroz, A.B.A. and Chiappini, C. and Ardèvol, J. and Casamiquela, L. and Figueras, F. and Jiménez-Arranz, Ó. and Jordi, C. and Monguió, M. and Romero-Gómez, M. and Altamirano, D. and Antoja, T. and Assaad, R. and Cantat-Gaudin, T. and Castro-Ginard, A. and Enke, H. and Girardi, L. and Guiglion, G. and Khan, S. and Luri, X. and Miglio, A. and Minchev, I. and Ramos, P. and Santiago, B.X. and Steinmetz, M.: Photo-astrometric distances, extinctions, and astrophysical parameters for Gaia EDR3 stars brighter than G = 18.5, Feb-2022, Astronomy & Astrophysics, Vol. 658, pp. A91, https://doi.org/10.1051/0004-6361/202142369
  4. [Arras2019] Arras, Philipp and Baltac, Mihai and Ensslin, Torsten A. and Frank, Philipp and Hutschenreuter, Sebastian and Knollmueller, Jakob and Leike, Reimar and Newrzella, Max-Niklas and Platz, Lukas and Reinecke, Martin and Stadler, Julia: NIFTy5: Numerical Information Field Theory v5, 03-2019
  5. [Arras2022] Arras, Philipp and Frank, Philipp and Haim, Philipp and Knollmüller, Jakob and Leike, Reimar and Reinecke, Martin and Enßlin, Torsten: Variable structures in M87* from space, time and frequency resolved interferometry, Jan-2022, Nature Astronomy, Vol. 6, pp. 259-269, https://doi.org/10.1038/s41550-021-01548-0
  6. [Astropy2013] Astropy Collaboration and Robitaille, Thomas P. and Tollerud, Erik J. and Greenfield, Perry and Droettboom, Michael and Bray, Erik and Aldcroft, Tom and Davis, Matt and Ginsburg, Adam and Price-Whelan, Adrian M. and Kerzendorf, Wolfgang E. and Conley, Alexander and Crighton, Neil and Barbary, Kyle and Muna, Demitri and Ferguson, Henry and Grollier, Frédéric and Parikh, Madhura M. and Nair, Prasanth H. and Unther, Hans M. and Deil, Christoph and Woillez, Julien and Conseil, Simon and Kramer, Roban and Turner, James E.H. and Singer, Leo and Fox, Ryan and Weaver, Benjamin A. and Zabalza, Victor and Edwards, Zachary I. and Azalee Bostroem, K. and Burke, D.J. and Casey, Andrew R. and Crawford, Steven M. and Dencheva, Nadia and Ely, Justin and Jenness, Tim and Labrie, Kathleen and Lim, Pey Lian and Pierfederici, Francesco and Pontzen, Andrew and Ptak, Andy and Refsdal, Brian and Servillat, Mathieu and Streicher, Ole: Astropy: A community Python package for astronomy, Oct-2013, Astronomy & Astrophysics, Vol. 558, pp. A33, https://doi.org/10.1051/0004-6361/201322068
  7. [Astropy2018] Astropy Collaboration and Price-Whelan, A.M. and Sipőcz, B.M. and Günther, H.M. and Lim, P.L. and Crawford, S.M. and Conseil, S. and Shupe, D.L. and Craig, M.W. and Dencheva, N. and Ginsburg, A. and VanderPlas, J.T. and Bradley, L.D. and Pérez-Suárez, D. and de Val-Borro, M. and Aldcroft, T.L. and Cruz, K.L. and Robitaille, T.P. and Tollerud, E.J. and Ardelean, C. and Babej, T. and Bach, Y.P. and Bachetti, M. and Bakanov, A.V. and Bamford, S.P. and Barentsen, G. and Barmby, P. and Baumbach, A. and Berry, K.L. and Biscani, F. and Boquien, M. and Bostroem, K.A. and Bouma, L.G. and Brammer, G.B. and Bray, E.M. and Breytenbach, H. and Buddelmeijer, H. and Burke, D.J. and Calderone, G. and Cano Rodr'\iguez, J.L. and Cara, M. and Cardoso, J.V.M. and Cheedella, S. and Copin, Y. and Corrales, L. and Crichton, D. and D'Avella, D. and Deil, C. and Depagne, É. and Dietrich, J.P. and Donath, A. and Droettboom, M. and Earl, N. and Erben, T. and Fabbro, S. and Ferreira, L.A. and Finethy, T. and Fox, R.T. and Garrison, L.H. and Gibbons, S.L.J. and Goldstein, D.A. and Gommers, R. and Greco, J.P. and Greenfield, P. and Groener, A.M. and Grollier, F. and Hagen, A. and Hirst, P. and Homeier, D. and Horton, A.J. and Hosseinzadeh, G. and Hu, L. and Hunkeler, J.S. and Ivezi'c, Ž. and Jain, A. and Jenness, T. and Kanarek, G. and Kendrew, S. and Kern, N.S. and Kerzendorf, W.E. and Khvalko, A. and King, J. and Kirkby, D. and Kulkarni, A.M. and Kumar, A. and Lee, A. and Lenz, D. and Littlefair, S.P. and Ma, Z. and Macleod, D.M. and Mastropietro, M. and McCully, C. and Montagnac, S. and Morris, B.M. and Mueller, M. and Mumford, S.J. and Muna, D. and Murphy, N.A. and Nelson, S. and Nguyen, G.H. and Ninan, J.P. and Nöthe, M. and Ogaz, S. and Oh, S. and Parejko, J.K. and Parley, N. and Pascual, S. and Patil, R. and Patil, A.A. and Plunkett, A.L. and Prochaska, J.X. and Rastogi, T. and Reddy Janga, V. and Sabater, J. and Sakurikar, P. and Seifert, M. and Sherbert, L.E. and Sherwood-Taylor, H. and Shih, A.Y. and Sick, J. and Silbiger, M.T. and Singanamalla, S. and Singer, L.P. and Sladen, P.H. and Sooley, K.A. and Sornarajah, S. and Streicher, O. and Teuben, P. and Thomas, S.W. and Tremblay, G.R. and Turner, J.E.H. and Terrón, V. and van Kerkwijk, M.H. and de la Vega, A. and Watkins, L.L. and Weaver, B.A. and Whitmore, J.B. and Woillez, J. and Zabalza, V. and Astropy Contributors: The Astropy Project: Building an Open-science Project and Status of the v2.0 Core Package, Sep-2018, Astronomical Journal, Vol. 156, Nr. 3, pp. 123, https://doi.org/10.3847/1538-3881/aabc4f
  8. [Astropy2022] Astropy Collaboration and Price-Whelan, Adrian M. and Lim, Pey Lian and Earl, Nicholas and Starkman, Nathaniel and Bradley, Larry and Shupe, David L. and Patil, Aarya A. and Corrales, Lia and Brasseur, C.E. and Nöthe, Maximilian and Donath, Axel and Tollerud, Erik and Morris, Brett M. and Ginsburg, Adam and Vaher, Eero and Weaver, Benjamin A. and Tocknell, James and Jamieson, William and van Kerkwijk, Marten H. and Robitaille, Thomas P. and Merry, Bruce and Bachetti, Matteo and Günther, H. Moritz and Aldcroft, Thomas L. and Alvarado-Montes, Jaime A. and Archibald, Anne M. and Bódi, Attila and Bapat, Shreyas and Barentsen, Geert and Bazán, Juanjo and Biswas, Manish and Boquien, Médéric and Burke, D.J. and Cara, Daria and Cara, Mihai and Conroy, Kyle E. and Conseil, Simon and Craig, Matthew W. and Cross, Robert M. and Cruz, Kelle L. and D'Eugenio, Francesco and Dencheva, Nadia and Devillepoix, Hadrien A.R. and Dietrich, Jörg P. and Eigenbrot, Arthur Davis and Erben, Thomas and Ferreira, Leonardo and Foreman-Mackey, Daniel and Fox, Ryan and Freij, Nabil and Garg, Suyog and Geda, Robel and Glattly, Lauren and Gondhalekar, Yash and Gordon, Karl D. and Grant, David and Greenfield, Perry and Groener, Austen M. and Guest, Steve and Gurovich, Sebastian and Handberg, Rasmus and Hart, Akeem and Hatfield-Dodds, Zac and Homeier, Derek and Hosseinzadeh, Griffin and Jenness, Tim and Jones, Craig K. and Joseph, Prajwel and Kalmbach, J. Bryce and Karamehmetoglu, Emir and Ka\luszy'nski, Miko\laj and Kelley, Michael S.P. and Kern, Nicholas and Kerzendorf, Wolfgang E. and Koch, Eric W. and Kulumani, Shankar and Lee, Antony and Ly, Chun and Ma, Zhiyuan and MacBride, Conor and Maljaars, Jakob M. and Muna, Demitri and Murphy, N.A. and Norman, Henrik and O'Steen, Richard and Oman, Kyle A. and Pacifici, Camilla and Pascual, Sergio and Pascual-Granado, J. and Patil, Rohit R. and Perren, Gabriel I. and Pickering, Timothy E. and Rastogi, Tanuj and Roulston, Benjamin R. and Ryan, Daniel F. and Rykoff, Eli S. and Sabater, Jose and Sakurikar, Parikshit and Salgado, Jesús and Sanghi, Aniket and Saunders, Nicholas and Savchenko, Volodymyr and Schwardt, Ludwig and Seifert-Eckert, Michael and Shih, Albert Y. and Jain, Anany Shrey and Shukla, Gyanendra and Sick, Jonathan and Simpson, Chris and Singanamalla, Sudheesh and Singer, Leo P. and Singhal, Jaladh and Sinha, Manodeep and Sipőcz, Brigitta M. and Spitler, Lee R. and Stansby, David and Streicher, Ole and Šumak, Jani and Swinbank, John D. and Taranu, Dan S. and Tewary, Nikita and Tremblay, Grant R. and de Val-Borro, Miguel and Van Kooten, Samuel J. and Vasovi'c, Zlatan and Verma, Shresth and de Miranda Cardoso, José Vin'\icius and Williams, Peter K.G. and Wilson, Tom J. and Winkel, Benjamin and Wood-Vasey, W.M. and Xue, Rui and Yoachim, Peter and Zhang, Chen and Zonca, Andrea and Astropy Project Contributors: The Astropy Project: Sustaining and Growing a Community-oriented Open-source Project and the Latest Major Release (v5.0) of the Core Package, Aug-2022, The Astrophysical Journal, Vol. 935, Nr. 2, pp. 167, https://doi.org/10.3847/1538-4357/ac7c74
  9. [CantatGaudin2022] Tristan Cantat-Gaudin and Morgan Fouesneau and Hans-Walter Rix and Anthony G. A. Brown and Alfred Castro-Ginard and Zuzanna Kostrzewa-Rutkowska and Ronald Drimmel and David W. Hogg and Andrew R. Casey and Shourya Khanna and Semyeong Oh and Adrian M. Price-Whelan and Vasily Belokurov and Andrew K. Saydjari and G. Green: An empirical model of the Gaia DR3 selection function, Jan-2023, Astronomy & Astrophysics, Vol. 669, pp. A55, https://doi.org/10.1051/0004-6361/202244784
  10. [Capitanio2017] Capitanio, L. and Lallement, R. and Vergely, J.L. and Elyajouri, M. and Monreal-Ibero, A.: Three-dimensional mapping of the local interstellar medium with composite data, Oct-2017, Astronomy & Astrophysics, Vol. 606, pp. A65, https://doi.org/10.1051/0004-6361/201730831
  11. [Carasco2021] Carrasco, J.M. and Weiler, M. and Jordi, C. and Fabricius, C. and De Angeli, F. and Evans, D.W. and van Leeuwen, F. and Riello, M. and Montegriffo, P.: Internal calibration of Gaia BP/RP low-resolution spectra, Aug-2021, Astronomy & Astrophysics, Vol. 652, pp. A86, https://doi.org/10.1051/0004-6361/202141249
  12. [chambers2019] K. C. Chambers and E. A. Magnier and N. Metcalfe and H. A. Flewelling and M. E. Huber and C. Z. Waters and L. Denneau and P. W. Draper and D. Farrow and D. P. Finkbeiner and C. Holmberg and J. Koppenhoefer and P. A. Price and A. Rest and R. P. Saglia and E. F. Schlafly and S. J. Smartt and W. Sweeney and R. J. Wainscoat and W. S. Burgett and S. Chastel and T. Grav and J. N. Heasley and K. W. Hodapp and R. Jedicke and N. Kaiser and R. -P. Kudritzki and G. A. Luppino and R. H. Lupton and D. G. Monet and J. S. Morgan and P. M. Onaka and B. Shiao and C. W. Stubbs and J. L. Tonry and R. White and E. Bañados and E. F. Bell and R. Bender and E. J. Bernard and M. Boegner and F. Boffi and M. T. Botticella and A. Calamida and S. Casertano and W. -P. Chen and X. Chen and S. Cole and N. Deacon and C. Frenk and A. Fitzsimmons and S. Gezari and V. Gibbs and C. Goessl and T. Goggia and R. Gourgue and B. Goldman and P. Grant and E. K. Grebel and N. C. Hambly and G. Hasinger and A. F. Heavens and T. M. Heckman and R. Henderson and T. Henning and M. Holman and U. Hopp and W. -H. Ip and S. Isani and M. Jackson and C. D. Keyes and A. M. Koekemoer and R. Kotak and D. Le and D. Liska and K. S. Long and J. R. Lucey and M. Liu and N. F. Martin and G. Masci and B. McLean and E. Mindel and P. Misra and E. Morganson and D. N. A. Murphy and A. Obaika and G. Narayan and M. A. Nieto-Santisteban and P. Norberg and J. A. Peacock and E. A. Pier and M. Postman and N. Primak and C. Rae and A. Rai and A. Riess and A. Riffeser and H. W. Rix and S. Röser and R. Russel and L. Rutz and E. Schilbach and A. S. B. Schultz and D. Scolnic and L. Strolger and A. Szalay and S. Seitz and E. Small and K. W. Smith and D. R. Soderblom and P. Taylor and R. Thomson and A. N. Taylor and A. R. Thakar and J. Thiel and D. Thilker and D. Unger and Y. Urata and J. Valenti and J. Wagner and T. Walder and F. Walter and S. P. Watters and S. Werner and W. M. Wood-Vasey and R. Wyse: The Pan-STARRS1 Surveys, 2019
  13. [Chen2018] Li, Linlin and Shen, Shiyin and Hou, Jinliang and Yuan, Haibo and Xiang, Maosheng and Chen, Bingqiu and Huang, Yang and Liu, Xiaowei: Three-dimensional Structure of the Milky Way Dust: Modeling of LAMOST Data, May-2018, The Astrophysical Journal, Vol. 858, Nr. 2, pp. 75, https://doi.org/10.3847/1538-4357/aabaef
  14. [Chen2019] Chen, B. -Q. and Huang, Y. and Yuan, H. -B. and Wang, C. and Fan, D. -W. and Xiang, M. -S. and Zhang, H. -W. and Tian, Z. -J. and Liu, X. -W.: Three-dimensional interstellar dust reddening maps of the Galactic plane, Mar-2019, Monthly Notices of the RAS, Vol. 483, Nr. 4, pp. 4277-4289, https://doi.org/10.1093/mnras/sty3341
  15. [DeAngeli2022] De Angeli, F. and Weiler, M. and Montegriffo, P. and Evans, D.W. and Riello, M. and Andrae, R. and Carrasco, J.M. and Busso, G. and Burgess, P.W. and Cacciari, C. and Davidson, M. and Harrison, D.L. and Hodgkin, S.T. and Jordi, C. and Osborne, P.J. and Pancino, E. and Altavilla, G. and Barstow, M.A. and Bailer-Jones, C.A.L. and Bellazzini, M. and Brown, A.G.A. and Castellani, M. and Cowell, S. and Delchambre, L. and De Luise, F. and Diener, C. and Fabricius, C. and Fouesneau, M. and Fremat, Y. and Gilmore, G. and Giuffrida, G. and Hambly, N.C. and Hidalgo, S. and Holland, G. and Kostrzewa-Rutkowska, Z. and van Leeuwen, F. and Lobel, A. and Marinoni, S. and Miller, N. and Pagani, C. and Palaversa, L. and Piersimoni, A.M. and Pulone, L. and Ragaini, S. and Rainer, M. and Richards, P.J. and Rixon, G.T. and Ruz-Mieres, D. and Sanna, N. and Sarro, L.M. and Rowell, N. and Sordo, R. and Walton, N.A. and Yoldas, A.: Gaia Data Release 3: Processing and validation of BP/RP low-resolution spectral data, Jun-2022, arXiv e-prints, pp. arXiv:2206.06143, https://doi.org/10.48550/arXiv.2206.06143
  16. [Delchambre2022] Delchambre, L. and Bailer-Jones, C.A.L. and Bellas-Velidis, I. and Drimmel, R. and Garabato, D. and Carballo, R. and Hatzidimitriou, D. and Marshall, D.J. and Andrae, R. and Dafonte, C. and Livanou, E. and Fouesneau, M. and Licata, E.L. and Lindstrom, H.E.P. and Manteiga, M. and Robin, C. and Silvelo, A. and Abreu Aramburu, A. and Alvarez, M.A. and Bakker, J. and Bijaoui, A. and Brouillet, N. and Brugaletta, E. and Burlacu, A. and Casamiquela, L. and Chaoul, L. and Chiavassa, A. and Contursi, G. and Cooper, W.J. and Creevey, O.L. and Dapergolas, A. and de Laverny, P. and Demouchy, C. and Dharmawardena, T.E. and Edvardsson, B. and Fremat, Y. and Garcia-Lario, P. and Garcia-Torres, M. and Gavel, A. and Gomez, A. and Gonzalez-Santamaria, I. and Heiter, U. and Jean-Antoine Piccolo, A. and Kontizas, M. and Kordopatis, G. and Korn, A.J. and Lanzafame, A.C. and Lebreton, Y. and Lobel, A. and Lorca, A. and Magdaleno Romeo, A. and Marocco, F. and Mary, N. and Nicolas, C. and Ordenovic, C. and Pailler, F. and Palicio, P.A. and Pallas-Quintela, L. and Panem, C. and Pichon, B. and Poggio, E. and Recio-Blanco, A. and Riclet, F. and Rybizki, J. and Santovena, R. and Sarro, L.M. and Schultheis, M.S. and Segol, M. and Slezak, I. and Smart, R.L. and Sordo, R. and Soubiran, C. and Suveges, M. and Thevenin, F. and Torralba Elipe, G. and Ulla, A. and Utrilla, E. and Vallenari, A. and van Dillen, E. and Zhao, H. and Zorec, J.: Gaia DR3: Apsis III -- Non-stellar content and source classification, Jun-2022, arXiv e-prints, pp. arXiv:2206.06710, https://doi.org/10.48550/arXiv.2206.06710
  17. [Dharmawardena2022] Dharmawardena, T.E. and Bailer-Jones, C.A.L. and Fouesneau, M. and Foreman-Mackey, D.: Three-dimensional dust density structure of the Orion, Cygnus X, Taurus, and Perseus star-forming regions, Feb-2022, Astronomy & Astrophysics, Vol. 658, pp. A166, https://doi.org/10.1051/0004-6361/202141298
  18. [Draine2011] Draine, Bruce T.: Physics of the Interstellar and Intergalactic Medium, 2011
  19. [Edenhofer2022] Edenhofer, Gordian and Leike, Reimar H. and Frank, Philipp and Enßlin, Torsten A.: Sparse Kernel Gaussian Processes through Iterative Charted Refinement (ICR), 2022, https://doi.org/10.48550/ARXIV.2206.10634
  20. [Edenhofer2023NIFTyRE] Edenhofer, Gordian and Frank, Philipp and Leike, Reimar H. and Roth, Jakob and Guerdi, Massin and Enßlin, Torsten A.: Re-Envisioning Numerical Information Field Theory (NIFTy): An Inference Library for Gaussian Processes and Variational Inference, 2023
  21. [Frank2021] Philipp Frank and Reimar Leike and Torsten A. Enßlin: Geometric Variational Inference, Jul-2021, Entropy, Vol. 23, Nr. 7, pp. 853, https://doi.org/10.3390/e23070853
  22. [Frank2022] Frank, Philipp: Geometric Variational Inference and Its Application to Bayesian Imaging, 2022, Physical Sciences Forum, Vol. 5, Nr. 1, https://doi.org/10.3390/psf2022005006
  23. [GaiaCollaboration2022] Gaia Collaboration and Vallenari, A. and Brown, A.G.A. and Prusti, T. et al.: Gaia Data Release 3: Summary of the content and survey properties, Jul-2022, arXiv e-prints, pp. arXiv:2208.00211, https://doi.org/10.48550/arXiv.2208.00211
  24. [Gorski2005] Górski, K.M. and Hivon, E. and Banday, A.J. et al.: HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere, Apr-2005, The Astrophysical Journal, Vol. 622, pp. 759-771, https://doi.org/10.1086/427976
  25. [Green2017] Green, Gregory M. and Schlafly, Edward F. and Finkbeiner, Douglas and Rix, Hans-Walter and Martin, Nicolas and Burgett, William and Draper, Peter W. and Flewelling, Heather and Hodapp, Klaus and Kaiser, Nicholas and Kudritzki, Rolf-Peter and Magnier, Eugene A. and Metcalfe, Nigel and Tonry, John L. and Wainscoat, Richard and Waters, Christopher: Galactic reddening in 3D from stellar photometry - an improved map, Jul-2018, Monthly Notices of the RAS, Vol. 478, Nr. 1, pp. 651-666, https://doi.org/10.1093/mnras/sty1008
  26. [Green2018] Green, Gregory M.: dustmaps: A Python interface for maps of interstellar dust, Jun-2018, The Journal of Open Source Software, Vol. 3, Nr. 26, pp. 695, https://doi.org/10.21105/joss.00695
  27. [Green2018Dustmaps] Green, Gregory M.: dustmaps: A Python interface for maps of interstellar dust, Jun-2018, The Journal of Open Source Software, Vol. 3, Nr. 26, pp. 695, https://doi.org/10.21105/joss.00695
  28. [Green2019] Green, Gregory M. and Schlafly, Edward and Zucker, Catherine and Speagle, Joshua S. and Finkbeiner, Douglas: A 3D Dust Map Based on Gaia, Pan-STARRS 1, and 2MASS, Dec-2019, The Astrophysical Journal, Vol. 887, Nr. 1, pp. 93, https://doi.org/10.3847/1538-4357/ab5362
  29. [Harris2020] Charles R. Harris and K. Jarrod Millman and Stéfan J. van der Walt et al.: Array programming with NumPy, Sep-2020, Nature, Vol. 585, Nr. 7825, pp. 357-362, https://doi.org/10.1038/s41586-020-2649-2
  30. [Hoffman2011] Hoffman, Matthew D. and Gelman, Andrew: The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo, Jan-2014, J. Mach. Learn. Res., Vol. 15, Nr. 1, pp. 1593–1623
  31. [Hutschenreuter2022] Hutschenreuter, S. and Anderson, C.S. and Betti, S. and Bower, G.C. and Brown, J. -A. and Brüggen, M. and Carretti, E. and Clarke, T. and Clegg, A. and Costa, A. and Croft, S. and Van Eck, C. and Gaensler, B.M. and de Gasperin, F. and Haverkorn, M. and Heald, G. and Hull, C.L.H. and Inoue, M. and Johnston-Hollitt, M. and Kaczmarek, J. and Law, C. and Ma, Y.K. and MacMahon, D. and Mao, S.A. and Riseley, C. and Roy, S. and Shanahan, R. and Shimwell, T. and Stil, J. and Sobey, C. and O'Sullivan, S.P. and Tasse, C. and Vacca, V. and Vernstrom, T. and Williams, P.K.G. and Wright, M. and Enßlin, T.A.: The Galactic Faraday rotation sky 2020, Jan-2022, Astronomy & Astrophysics, Vol. 657, pp. A43, https://doi.org/10.1051/0004-6361/202140486
  32. [Hutschenreuter2023] Hutschenreuter, Sebastian and Haverkorn, Marijke and Frank, Philipp and Raycheva, Nergis C. and Enßlin, Torsten A.: Disentangling the Faraday rotation sky, Apr-2023, arXiv e-prints, pp. arXiv:2304.12350, https://doi.org/10.48550/arXiv.2304.12350
  33. [Jax2018] James Bradbury and Roy Frostig and Peter Hawkins and Matthew James Johnson and Chris Leary and Dougal Maclaurin and George Necula and Adam Paszke and Jake VanderPlas and Skye Wanderman-Milne and Qiao Zhang: JAX: composable transformations of Python+NumPy programs, 2018, http://github.com/google/jax
  34. [Knollmueller2019] Knollmüller, Jakob and Enßlin, Torsten A.: Metric Gaussian Variational Inference, 2019, https://doi.org/10.48550/ARXIV.1901.11033
  35. [Kuhn2021] Kuhn, Michael A. and de Souza, Rafael S. and Krone-Martins, Alberto and Castro-Ginard, Alfred and Ishida, Emille E.O. and Povich, Matthew S. and Hillenbrand, Lynne A. and COIN Collaboration: SPICY: The Spitzer/IRAC Candidate YSO Catalog for the Inner Galactic Midplane, Jun-2021, Astrophysical Journal, Supplement, Vol. 254, Nr. 2, pp. 33, https://doi.org/10.3847/1538-4365/abe465
  36. [Kuhn2023YSO] Kuhn, Mike: personal communication, 2023-05-01
  37. [Lallement2018] Lallement, R. and Capitanio, L. and Ruiz-Dern, L. and Danielski, C. and Babusiaux, C. and Vergely, L. and Elyajouri, M. and Arenou, F. and Leclerc, N.: Three-dimensional maps of interstellar dust in the Local Arm: using Gaia, 2MASS, and APOGEE-DR14, Aug-2018, Astronomy & Astrophysics, Vol. 616, pp. A132, https://doi.org/10.1051/0004-6361/201832832
  38. [Lallement2019] Lallement, R. and Babusiaux, C. and Vergely, J.L. and Katz, D. and Arenou, F. and Valette, B. and Hottier, C. and Capitanio, L.: Gaia-2MASS 3D maps of Galactic interstellar dust within 3 kpc, May-2019, Astronomy & Astrophysics, Vol. 625, pp. A135, https://doi.org/10.1051/0004-6361/201834695
  39. [Lallement2022] Lallement, R. and Vergely, J.L. and Babusiaux, C. and Cox, N.L.J.: Updated Gaia-2MASS 3D maps of Galactic interstellar dust, May-2022, Astronomy & Astrophysics, Vol. 661, pp. A147, https://doi.org/10.1051/0004-6361/202142846
  40. [Leike2019] Leike, RH and Enßlin, TA: Charting nearby dust clouds using Gaia data only, 2019, Astronomy & Astrophysics, Vol. 631, pp. A32
  41. [Leike2020] Leike, R.H. and Glatzle, M. and Enßlin, T.A.: Resolving nearby dust clouds, 2020, Astronomy & Astrophysics, Vol. 639, pp. A138
  42. [Leike2022] Leike, R.H. and Edenhofer, G. and Knollmüller, J. and Alig, C. and Frank, P. and Enßlin, T.A.: The Galactic 3D large-scale dust distribution via Gaussian process regression on spherical coordinates, Apr-2022, arXiv e-prints, pp. arXiv:2204.11715, https://doi.org/10.48550/arXiv.2204.11715
  43. [Marton2022] Marton, Gábor and Ábrahám, Péter and Rimoldini, Lorenzo and Audard, Marc and Kun, Mária and Nagy, Zsófia and Kóspál, Ágnes and Szabados, László and Holl, Berry and Gavras, Panagiotis and Mowlavi, Nami and Nienartowicz, Krzysztof and Jevardat de Fombelle, Grégory and Lecoeur-Taïbi, Isabelle and Karbevska, Lea and Garcia-Lario, Pedro and Eyer, Laurent: Gaia Data Release 3 Validating the classification of variable Young Stellar Object candidates, Jun-2022, arXiv e-prints, pp. arXiv:2206.05796, https://doi.org/10.48550/arXiv.2206.05796
  44. [Mertsch2023] Mertsch, P. and Phan, V.H.M.: Bayesian inference of three-dimensional gas maps. II. Galactic HI, Mar-2023, Astronomy & Astrophysics, Vol. 671, pp. A54, https://doi.org/10.1051/0004-6361/202243326
  45. [Montegriffo2022] Montegriffo, P. and De Angeli, F. and Andrae, R. and Riello, M. and Pancino, E. and Sanna, N. and Bellazzini, M. and Evans, D.W. and Carrasco, J.M. and Sordo, R. and Busso, G. and Cacciari, C. and Jordi, C. and van Leeuwen, F. and Vallenari, A. and Altavilla, G. and Barstow, M.A. and Brown, A.G.A. and Burgess, P.W. and Castellani, M. and Cowell, S. and Davidson, M. and De Luise, F. and Delchambre, L. and Diener, C. and Fabricius, C. and Fremat, Y. and Fouesneau, M. and Gilmore, G. and Giuffrida, G. and Hambly, N.C. and Harrison, D.L. and Hidalgo, S. and Hodgkin, S.T. and Holland, G. and Marinoni, S. and Osborne, P.J. and Pagani, C. and Palaversa, L. and Piersimoni, A.M. and Pulone, L. and Ragaini, S. and Rainer, M. and Richards, P.J. and Rowell, N. and Ruz-Mieres, D. and Sarro, L.M. and Walton, N.A. and Yoldas, A.: Gaia Data Release 3: External calibration of BP/RP low-resolution spectroscopic data, Jun-2022, arXiv e-prints, pp. arXiv:2206.06205, https://doi.org/10.48550/arXiv.2206.06205
  46. [Planck2013] Planck Collaboration and Abergel, A. and Ade, P.A.R. and Aghanim, N. et al.: Planck 2013 results. XI. All-sky model of thermal dust emission, Nov-2014, Astronomy & Astrophysics, Vol. 571, pp. A11, https://doi.org/10.1051/0004-6361/201323195
  47. [Popescu2002] Popescu, Cristina C. and Tuffs, Richard J.: The percentage of stellar light re-radiated by dust in late-type Virgo Cluster galaxies, Sep-2002, Monthly Notices of the RAS, Vol. 335, Nr. 2, pp. L41-L44, https://doi.org/10.1046/j.1365-8711.2002.05881.x
  48. [Queiroz2023] Queiroz, Anna B.A. and Anders, Friedrich and Chiappini, Cristina and Khalatyan, Arman and Santiago, Basilio X. and Nepal, Samir and Steinmetz, Matthias and Gallart, Carme and Valentini, Marica and Dal Ponte, Marina and Barbuy, Beatriz and Pérez-Villegas, Angeles and Masseron, Thomas and Fernández-Trincado, José G. and Khoperskov, Sergey and Minchev, Ivan and Fernández-Alvar, Emma and Lane, Richard R. and Nitschelm, Christian: StarHorse results for spectroscopic surveys + Gaia DR3: Chrono-chemical populations in the solar vicinity, the genuine thick disk, and young-alpha rich stars, Mar-2023, arXiv e-prints, pp. arXiv:2303.09926, https://doi.org/10.48550/arXiv.2303.09926
  49. [Rezaei2017] Rezaei Kh., S. and Bailer-Jones, C.A.L. and Hanson, R.J. and Fouesneau, M.: Inferring the three-dimensional distribution of dust in the Galaxy with a non-parametric method . Preparing for Gaia, Feb-2017, Astronomy & Astrophysics, Vol. 598, pp. A125, https://doi.org/10.1051/0004-6361/201628885
  50. [Rezaei2018] Rezaei Kh., Sara and Bailer-Jones, Coryn A.L. and Hogg, David W. and Schultheis, Mathias: Detection of the Milky Way spiral arms in dust from 3D mapping, Oct-2018, Astronomy & Astrophysics, Vol. 618, pp. A168, https://doi.org/10.1051/0004-6361/201833284
  51. [Rezaei2020] Rezaei Kh., Sara and Bailer-Jones, Coryn A.L. and Soler, Juan D. and Zari, Eleonora: Detailed 3D structure of Orion A in dust with Gaia DR2, Nov-2020, Astronomy & Astrophysics, Vol. 643, pp. A151, https://doi.org/10.1051/0004-6361/202038708
  52. [Rezaei2022] Rezaei Kh., Sara and Kainulainen, Jouni: Three-dimensional Shape Explains Star Formation Mystery of California and Orion A, May-2022, Astrophysical Journal, Letters, Vol. 930, Nr. 2, pp. L22, https://doi.org/10.3847/2041-8213/ac67db
  53. [Rezende2015] Rezende, Danilo Jimenez and Mohamed, Shakir: Variational Inference with Normalizing Flows, 2015, http://proceedings.mlr.press/v37/rezende15.html
  54. [Roth2023DirectionDependentCalibration] Roth, Jakob and Arras, Philipp and Reinecke, Martin and Perley, Richard A. and Westermann, Rüdiger and Enßlin, Torsten A.: Bayesian radio interferometric imaging with direction-dependent calibration, May-2023, arXiv e-prints, pp. arXiv:2305.05489, https://doi.org/10.48550/arXiv.2305.05489
  55. [Roth2023FastCadenceHighContrastImaging] Roth, J. and Li Causi, G. and Testa, V. and Arras, P. and Ensslin, T.A.: Fast-cadence High-contrast Imaging with Information Field Theory, Mar-2023, Astronomical Journal, Vol. 165, Nr. 3, pp. 86, https://doi.org/10.3847/1538-3881/acabc1
  56. [Schlafly2019] Schlafly, Edward F. and Meisner, Aaron M. and Green, Gregory M.: The unWISE Catalog: Two Billion Infrared Sources from Five Years of WISE Imaging, Feb-2019, Astrophysical Journal, Supplement, Vol. 240, Nr. 2, pp. 30, https://doi.org/10.3847/1538-4365/aafbea
  57. [Selig2013] Selig, Marco and Bell, Michael R. and Junklewitz, Henrik and Oppermann, Niels and Reinecke, Martin and Greiner, Maksim and Pachajoa, Carlos and Ensslin, Torsten A.: NIFTY: A versatile Python library for signal inference, 02-2013
  58. [Skrutskie2006] Skrutskie, M.F. and Cutri, R.M. and Stiening, R. and Weinberg, M.D. and Schneider, S. and Carpenter, J.M. and Beichman, C. and Capps, R. and Chester, T. and Elias, J. and Huchra, J. and Liebert, J. and Lonsdale, C. and Monet, D.G. and Price, S. and Seitzer, P. and Jarrett, T. and Kirkpatrick, J.D. and Gizis, J.E. and Howard, E. and Evans, T. and Fowler, J. and Fullmer, L. and Hurt, R. and Light, R. and Kopan, E.L. and Marsh, K.A. and McCallon, H.L. and Tam, R. and Van Dyk, S. and Wheelock, S.: The Two Micron All Sky Survey (2MASS), Feb-2006, Astronomical Journal, Vol. 131, Nr. 2, pp. 1163-1183, https://doi.org/10.1086/498708
  59. [Steiniger2017] Steininger, Theo and Dixit, Jait and Frank, Philipp and Greiner, Maksim and Hutschenreuter, Sebastian and Knollmüller, Jakob and Leike, Reimar and Porqueres, Natalia and Pumpe, Daniel and Reinecke, Martin and others: NIFTy 3-Numerical Information Field Theory-A Python framework for multicomponent signal inference on HPC clusters, 2017, arXiv preprint arXiv:1708.01073
  60. [Tsouros2023] Tsouros, Alexandros and Edenhofer, Gordian and Enßlin, Torsten and Mastorakis, Michalis and Pavlidou, Vasiliki: Reconstructing Galactic magnetic fields from local measurements for backtracking ultra-high-energy cosmic rays, Mar-2023, arXiv e-prints, pp. arXiv:2303.10099, https://doi.org/10.48550/arXiv.2303.10099
  61. [Vergely2022] Vergely, J.L. and Lallement, R. and Cox, N.L.J.: Three-dimensional extinction maps: Inverting inter-calibrated extinction catalogues, Aug-2022, Astronomy & Astrophysics, Vol. 664, pp. A174, https://doi.org/10.1051/0004-6361/202243319
  62. [Wang2022] Wang, Chun and Huang, Yang and Yuan, Haibo and Zhang, Huawei and Xiang, Maosheng and Liu, Xiaowei: The Value-added Catalog for LAMOST DR8 Low-resolution Spectra, Apr-2022, Astrophysical Journal, Supplement, Vol. 259, Nr. 2, pp. 51, https://doi.org/10.3847/1538-4365/ac4df7
  63. [Winston2020] Winston, Elaine and Hora, Joseph L. and Tolls, Volker: A Census of Star Formation in the Outer Galaxy. II. The GLIMPSE360 Field, Aug-2020, Astronomical Journal, Vol. 160, Nr. 2, pp. 68, https://doi.org/10.3847/1538-3881/ab99c8
  64. [Wright2010] Wright, Edward L. and Eisenhardt, Peter R.M. and Mainzer, Amy K. and Ressler, Michael E. and Cutri, Roc M. and Jarrett, Thomas and Kirkpatrick, J. Davy and Padgett, Deborah and McMillan, Robert S. and Skrutskie, Michael and Stanford, S.A. and Cohen, Martin and Walker, Russell G. and Mather, John C. and Leisawitz, David and Gautier, Thomas N., III and McLean, Ian and Benford, Dominic and Lonsdale, Carol J. and Blain, Andrew and Mendez, Bryan and Irace, William R. and Duval, Valerie and Liu, Fengchuan and Royer, Don and Heinrichsen, Ingolf and Howard, Joan and Shannon, Mark and Kendall, Martha and Walsh, Amy L. and Larsen, Mark and Cardon, Joel G. and Schick, Scott and Schwalm, Mark and Abid, Mohamed and Fabinsky, Beth and Naes, Larry and Tsai, Chao-Wei: The Wide-field Infrared Survey Explorer (WISE): Mission Description and Initial On-orbit Performance, Dec-2010, Astronomical Journal, Vol. 140, Nr. 6, pp. 1868-1881, https://doi.org/10.1088/0004-6256/140/6/1868
  65. [Xiang2022] Xiang, Maosheng and Rix, Hans-Walter and Ting, Yuan-Sen and Kudritzki, Rolf-Peter and Conroy, Charlie and Zari, Eleonora and Shi, Jian-Rong and Przybilla, Norbert and Ramirez-Tannus, Maria and Tkachenko, Andrew and Gebruers, Sarah and Liu, Xiao-Wei: Stellar labels for hot stars from low-resolution spectra. I. The HotPayne method and results for 330 000 stars from LAMOST DR6, Jun-2022, Astronomy & Astrophysics, Vol. 662, pp. A66, https://doi.org/10.1051/0004-6361/202141570
  66. [Zhang2023] Zhang, Xiangyu and Green, Gregory M. and Rix, Hans-Walter: Parameters of 220 million stars from Gaia BP/RP spectra, Mar-2023, arXiv e-prints, pp. arXiv:2303.03420, https://doi.org/10.48550/arXiv.2303.03420
  67. [Zonca2019] Andrea Zonca and Leo Singer and Daniel Lenz and Martin Reinecke and Cyrille Rosset and Eric Hivon and Krzysztof Gorski: healpy: equal area pixelization and spherical harmonics transforms for data on the sphere in Python, Mar-2019, Journal of Open Source Software, Vol. 4, Nr. 35, pp. 1298, https://doi.org/10.21105/joss.01298
  68. [Zucker2019] Zucker, Catherine and Speagle, Joshua S. and Schlafly, Edward F. and Green, Gregory M. and Finkbeiner, Douglas P. and Goodman, Alyssa A. and Alves, João: A Large Catalog of Accurate Distances to Local Molecular Clouds: The Gaia DR2 Edition, Jul-2019, The Astrophysical Journal, Vol. 879, Nr. 2, pp. 125, https://doi.org/10.3847/1538-4357/ab2388
  69. [Zucker2021] Catherine Zucker and Alyssa Goodman and Joao Alves and Shmuel Bialy and Eric W. Koch and Joshua S. Speagle and Michael M. Foley and Douglas Finkbeiner and Reimar Leike and Torsten Ensslin and Joshua E. G. Peek and Gordian Edenhofer: On the Three-dimensional Structure of Local Molecular Clouds, Sep-2021, The Astrophysical Journal, Vol. 919, Nr. 1, pp. 35, https://doi.org/10.3847/1538-4357/ac1f96
]]>
https://eklausmeier.goip.de/blog/2023/07-09-profiling-php-programs-p2 https://eklausmeier.goip.de/blog/2023/07-09-profiling-php-programs-p2 Profiling PHP Programs #2 Sun, 09 Jul 2023 15:30:00 +0200 After adding a number of smaller features to Simplified Saaze I wanted to make sure that not too much fat had been added. So I profiled Simplified Saaze with XHProf. I had previously written about PHP profiling in Profiling PHP Programs. I used XHProf version 2.3.9 and PHP version 8.2.8 on Arch Linux with kernel 6.4.1. Simplified Saaze is at the version below:

$ php saaze -v
Version 1.29, 08-Jul-2023, written by Elmar Klausmeier

First a run without profiler.

$ time php saaze -mortb /tmp/build
Building static site in /tmp/build...
        execute(): filePath=/home/klm/php/sndsaaze/content/aux.yml, nentries=6, totalPages=1, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/blog.yml, nentries=409, totalPages=21, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/gallery.yml, nentries=6, totalPages=1, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/music.yml, nentries=56, totalPages=3, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/error.yml, nentries=1, totalPages=1, entries_per_page=20
Finished creating 5 collections, 4 with index, and 495 entries (0.15 secs / 10.83MB)
#collections=5, YamlParser=0.0074/501-5, md2html=0.0145, MathParser=0.0078/495, renderEntry=495, content=495/0, excerpt=0/0
        real 0.17s
        user 0.15s
        sys 0
        swapped 0
        total space 0

Now with profiler for the exact same input.

$ time php saaze -mortb /tmp/build
Building static site in /tmp/build...
        execute(): filePath=/home/klm/php/sndsaaze/content/aux.yml, nentries=6, totalPages=1, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/blog.yml, nentries=409, totalPages=21, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/gallery.yml, nentries=6, totalPages=1, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/music.yml, nentries=56, totalPages=3, entries_per_page=20
        execute(): filePath=/home/klm/php/sndsaaze/content/error.yml, nentries=1, totalPages=1, entries_per_page=20
Finished creating 5 collections, 4 with index, and 495 entries (0.27 secs / 11.02MB)
#collections=5, YamlParser=0.0109/501-5, md2html=0.0205, MathParser=0.0253/495, renderEntry=495, content=495/0, excerpt=0/0
Warning: Must specify directory location for XHProf runs. Trying /tmp as default. You can either pass the directory location as an argument to the constructor for XHProfRuns_Default() or set xhprof.output_dir ini param, or set XHPROF_OUTPUT_DIR environment variable.
---------------
Assuming you have set up the http based UI for
XHProf at some address, you can view run at
http://<xhprof-ui-address>/index.php?run=64a9a7b7f24b2&source=saaze
---------------
        real 0.29s
        user 0.18s
        sys 0
        swapped 0
        total space 0

One can clearly see that XHProf makes the program almost twice as slow. In our case this is no problem at all, as the runtime is still way below half a second for all our roughly 500 blog posts.
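
For reference, a profiling run like the above can be produced roughly as sketched below. This is a minimal sketch, not the actual Saaze code: the xhprof_lib paths are assumptions based on the Arch Linux xhprof package layout, and run_build() is a hypothetical stand-in for the actual static site build.

<?php
// Minimal sketch: wrap the build in xhprof_enable()/xhprof_disable().
function run_build(): void { /* ... the actual static site build ... */ }

xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);  // start profiling
run_build();
$data = xhprof_disable();                               // raw profile as array

// Paths below are assumed from the Arch Linux xhprof package.
require_once '/usr/share/webapps/xhprof/xhprof_lib/utils/xhprof_lib.php';
require_once '/usr/share/webapps/xhprof/xhprof_lib/utils/xhprof_runs.php';
$runs  = new XHProfRuns_Default();        // writes to xhprof.output_dir, default /tmp
$runId = $runs->save_run($data, 'saaze'); // "saaze" is the source tag seen above
echo "http://localhost/xhprof_html/index.php?run=$runId&source=saaze\n";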

Analyzing the output via GUI. We create a symlink to XHProf's PHP and JavaScript code:

cd /srv/http
ln -s /usr/share/webapps/xhprof/xhprof_html

Results are then viewed at http://localhost/xhprof_html/index.php. The tables below show the result of this data. The time unit is microseconds (µs).

Overall Summary
Total Incl. Wall Time: 268,275 µs
Total Incl. CPU: 266,828 µs
Total Incl. MemUse: 9,671,352 bytes
Total Incl. PeakMemUse: 9,792,824 bytes
Number of Function Calls: 192,936

Details.

Function Name Calls Calls% Incl. Wall Time (µs) IWall% Excl. Wall Time (µs) EWall% Incl. CPU (µs) ICpu% Excl. CPU (µs) ECPU% Incl. MemUse (bytes) IMemUse% Excl. MemUse (bytes) EMemUse% Incl. PeakMemUse (bytes) IPeakMemUse% Excl. PeakMemUse (bytes) EPeakMemUse%
main() 1 0.0% 268,275 100.0% 30 0.0% 266,828 100.0% 25 0.0% 9,671,352 100.0% -18,488 -0.2% 9,792,824 100.0% 0 0.0%
Saaze\BuildCommand::buildAllStatic 1 0.0% 267,921 99.9% 990 0.4% 266,474 99.9% 834 0.3% 9,516,976 98.4% -486,936 -5.0% 9,792,824 100.0% 0 0.0%
Saaze\BuildCommand::buildEntry 495 0.3% 165,125 61.6% 3,418 1.3% 164,740 61.7% 2,721 1.0% 1,783,216 18.4% -18,936,600 -195.8% 3,901,352 39.8% 0 0.0%
Saaze\TemplateManager::renderEntry 495 0.3% 145,793 54.3% 40,623 15.1% 145,444 54.5% 32,168 12.1% 20,529,904 212.3% -23,448,896 -242.5% 3,901,352 39.8% 466,512 4.8%
Saaze\Collection::getEntries 5 0.0% 77,137 28.8% 8 0.0% 76,211 28.6% 8 0.0% 6,791,912 70.2% 376 0.0% 5,462,264 55.8% 0 0.0%
Saaze\Collection::loadEntries 5 0.0% 74,368 27.7% 11 0.0% 73,440 27.5% 12 0.0% 6,788,976 70.2% 856 0.0% 5,447,368 55.6% 0 0.0%
Saaze\Collection::loadMkdwnRecursive 5 0.0% 74,348 27.7% 58 0.0% 73,419 27.5% 41 0.0% 6,787,744 70.2% -10,400 -0.1% 5,447,368 55.6% 0 0.0%
Saaze\Collection::loadMkdwnRecursive@1 21 0.0% 72,785 27.1% 897 0.3% 71,863 26.9% 627 0.2% 6,569,344 67.9% -10,376 -0.1% 5,440,880 55.6% 0 0.0%
Saaze\Collection::loadEntry 496 0.3% 71,986 26.8% 1,556 0.6% 71,164 26.7% 1,427 0.5% 6,732,744 69.6% -5,808 -0.1% 5,447,368 55.6% 0 0.0%
Saaze\Entry::getContentAndExcerpt 495 0.3% 52,920 19.7% 1,428 0.5% 52,056 19.5% 1,205 0.5% 3,330,256 34.4% -1,640,096 -17.0% 3,561,672 36.4% 0 0.0%
Saaze\MarkdownContentParser::toHtml 495 0.3% 46,252 17.2% 6,944 2.6% 45,396 17.0% 5,127 1.9% 4,968,160 51.4% -2,228,808 -23.0% 3,561,672 36.4% 19,840 0.2%
load::blog/entry.php 423 0.2% 18,957 7.1% 18,957 7.1% 19,027 7.1% 19,027 7.1% 7,078,088 73.2% 7,078,088 73.2% 89,408 0.9% 89,408 0.9%
load::templates/top-layout.php 525 0.3% 15,161 5.7% 15,161 5.7% 15,286 5.7% 15,286 5.7% 10,750,256 111.2% 10,750,256 111.2% 363,184 3.7% 363,184 3.7%
load::templates/read_cattag_json.php 495 0.3% 14,344 5.3% 14,344 5.3% 14,449 5.4% 14,449 5.4% 4,175,336 43.2% 4,175,336 43.2% 0 0.0% 0 0.0%
Saaze\Entry::__construct 496 0.3% 12,221 4.6% 534 0.2% 12,250 4.6% 476 0.2% 3,194,120 33.0% -2,497,920 -25.8% 1,885,696 19.3% 0 0.0%
load::templates/bottom-layout.php 525 0.3% 12,178 4.5% 12,178 4.5% 12,263 4.6% 12,263 4.6% 6,169,952 63.8% 6,169,952 63.8% 560,952 5.7% 560,952 5.7%
Saaze\Entry::parseEntry 496 0.3% 11,370 4.2% 3,225 1.2% 11,456 4.3% 2,519 0.9% 5,530,320 57.2% 4,256 0.0% 1,885,696 19.3% 1,168 0.0%
md4c_toHtml 495 0.3% 9,714 3.6% 9,714 3.6% 9,078 3.4% 9,078 3.4% 48,072 0.5% 48,072 0.5% 672 0.0% 672 0.0%
file_put_contents 529 0.3% 9,601 3.6% 9,601 3.6% 9,715 3.6% 9,715 3.6% 2,240 0.0% 2,240 0.0% 568 0.0% 568 0.0%
Saaze\BuildCommand::clearBuildDirectory 1 0.0% 8,852 3.3% 2 0.0% 8,832 3.3% 3 0.0% 20,232 0.2% 664 0.0% 0 0.0% 0 0.0%
Saaze\BuildCommand::delTree 1 0.0% 8,843 3.3% 14 0.0% 8,822 3.3% 14 0.0% 19,016 0.2% -240 -0.0% 0 0.0% 0 0.0%
Saaze\BuildCommand::delTree@1 4 0.0% 8,741 3.3% 62 0.0% 8,717 3.3% 33 0.0% 14,160 0.1% -5,392 -0.1% 0 0.0% 0 0.0%
Saaze\BuildCommand::delTree@2 31 0.0% 8,592 3.2% 797 0.3% 8,578 3.2% 569 0.2% 11,560 0.1% -124,424 -1.3% 0 0.0% 0 0.0%
printf 30,111 15.6% 8,301 3.1% 8,301 3.1% 10,966 4.1% 10,966 4.1% 2,857,640 29.5% 2,857,640 29.5% 15,656 0.2% 15,656 0.2%
substr 38,306 19.9% 7,588 2.8% 7,588 2.8% 10,921 4.1% 10,921 4.1% 9,411,528 97.3% 9,411,528 97.3% 733,920 7.5% 733,920 7.5%
strpos 14,121 7.3% 6,972 2.6% 6,972 2.6% 8,288 3.1% 8,288 3.1% 6,040 0.1% 6,040 0.1% 552 0.0% 552 0.0%
Saaze\BuildCommand::delTree@3 498 0.3% 6,778 2.5% 1,445 0.5% 6,849 2.6% 1,119 0.4% 57,720 0.6% -213,720 -2.2% 0 0.0% 0 0.0%
Saaze\BuildCommand::buildCollectionIndex 32 0.0% 6,475 2.4% 184 0.1% 6,453 2.4% 171 0.1% 57,832 0.6% -971,448 -10.0% 62,592 0.6% 0 0.0%
Saaze\Entry::slug 1,484 0.8% 5,493 2.0% 3,647 1.4% 5,661 2.1% 3,028 1.1% 234,320 2.4% -295,248 -3.1% 0 0.0% 0 0.0%
Saaze\TemplateManager::renderCollection 30 0.0% 5,446 2.0% 1,253 0.5% 5,440 2.0% 1,058 0.4% 1,019,592 10.5% -1,355,408 -14.0% 62,592 0.6% 0 0.0%
prtCatOrTag 2 0.0% 5,415 2.0% 3,192 1.2% 5,374 2.0% 2,523 0.9% 552,960 5.7% -172,840 -1.8% 0 0.0% 0 0.0%
ob_get_contents 528 0.3% 5,169 1.9% 5,169 1.9% 5,251 2.0% 5,251 2.0% 19,498,632 201.6% 19,498,632 201.6% 838,960 8.6% 838,960 8.6%
str_contains 24,694 12.8% 5,006 1.9% 5,006 1.9% 7,223 2.7% 7,223 2.7% 552 0.0% 552 0.0% 0 0.0% 0 0.0%
Saaze\MarkdownContentParser::myTag 2,026 1.1% 4,962 1.8% 3,341 1.2% 5,202 1.9% 2,879 1.1% 297,136 3.1% -766,728 -7.9% 39,400 0.4% 352 0.0%
str_word_count 495 0.3% 4,880 1.8% 4,880 1.8% 4,950 1.9% 4,950 1.9% 552 0.0% 552 0.0% 0 0.0% 0 0.0%
Saaze\MarkdownContentParser::getExcerpt 495 0.3% 4,794 1.8% 1,125 0.4% 4,818 1.8% 936 0.4% 438,712 4.5% -2,630,104 -27.2% 2,714,336 27.7% 0 0.0%
Saaze\BuildCommand::build_cat_and_tag 495 0.3% 4,133 1.5% 3,010 1.1% 4,207 1.6% 2,637 1.0% 849,496 8.8% 831,256 8.6% 0 0.0% 0 0.0%
array_map 909 0.5% 4,126 1.5% 1,715 0.6% 4,205 1.6% 1,515 0.6% 401,184 4.1% 215,496 2.2% 680 0.0% 680 0.0%
load::jscss/blogklm.css 525 0.3% 4,025 1.5% 4,025 1.5% 4,141 1.6% 4,141 1.6% 2,684,832 27.8% 2,684,832 27.8% 0 0.0% 0 0.0%
file_get_contents 502 0.3% 3,966 1.5% 3,966 1.5% 4,066 1.5% 4,066 1.5% 2,956,024 30.6% 2,956,024 30.6% 1,635,184 16.7% 1,635,184 16.7%
Saaze\MarkdownContentParser::hackLNHighlight 157 0.1% 3,904 1.5% 1,411 0.5% 3,916 1.5% 1,096 0.4% 1,750,440 18.1% -11,205,304 -115.9% 99,200 1.0% 45,528 0.5%
Saaze\Entry::getUrl 495 0.3% 3,348 1.2% 1,211 0.5% 3,405 1.3% 1,028 0.4% 89,760 0.9% -48,792 -0.5% 0 0.0% 0 0.0%
strip_tags 495 0.3% 3,321 1.2% 3,321 1.2% 3,355 1.3% 3,355 1.3% 2,779,096 28.7% 2,779,096 28.7% 2,714,336 27.7% 2,714,336 27.7%
load::templates/head.php 526 0.3% 3,276 1.2% 3,276 1.2% 3,414 1.3% 3,414 1.3% 1,589,648 16.4% 1,589,648 16.4% 0 0.0% 0 0.0%
load::templates/entry.php 70 0.0% 3,185 1.2% 3,185 1.2% 3,208 1.2% 3,208 1.2% 1,131,120 11.7% 1,131,120 11.7% 0 0.0% 0 0.0%
yaml_parse 501 0.3% 3,150 1.2% 3,150 1.2% 3,253 1.2% 3,253 1.2% 646,448 6.7% 646,448 6.7% 51,496 0.5% 51,496 0.5%
Saaze\MarkdownContentParser::inlineMath 143 0.1% 3,066 1.1% 2,013 0.8% 3,084 1.2% 1,545 0.6% 127,704 1.3% -3,570,144 -36.9% 102,712 1.0% 78,648 0.8%
str_replace 8,197 4.2% 3,038 1.1% 3,038 1.1% 4,031 1.5% 4,031 1.5% 1,545,328 16.0% 1,545,328 16.0% 0 0.0% 0 0.0%
Saaze\TemplateManager::renderGeneral 3 0.0% 2,820 1.1% 1,255 0.5% 2,801 1.0% 949 0.4% 456,216 4.7% -649,976 -6.7% 191,696 2.0% 64 0.0%
Saaze\Collection::sortEntries 5 0.0% 2,761 1.0% 18 0.0% 2,763 1.0% 16 0.0% 2,560 0.0% 856 0.0% 14,896 0.2% 0 0.0%
usort 5 0.0% 2,742 1.0% 1,680 0.6% 2,743 1.0% 1,347 0.5% 1,152 0.0% 600 0.0% 14,896 0.2% 14,896 0.2%
scandir 574 0.3% 2,739 1.0% 2,739 1.0% 2,842 1.1% 2,842 1.1% 394,936 4.1% 394,936 4.1% 0 0.0% 0 0.0%
Saaze\MarkdownContentParser::youtube 491 0.3% 2,409 0.9% 315 0.1% 2,501 0.9% 340 0.1% 240,472 2.5% -2,712 -0.0% 21,152 0.2% 256 0.0%
unlink 528 0.3% 2,334 0.9% 2,334 0.9% 2,416 0.9% 2,416 0.9% 2,904 0.0% 2,904 0.0% 0 0.0% 0 0.0%
Saaze\TemplateManager::{closure} 2,238 1.2% 2,308 0.9% 1,799 0.7% 2,532 0.9% 1,718 0.6% 185,136 1.9% 176,728 1.8% 0 0.0% 0 0.0%
Saaze\MarkdownContentParser::gallery 274 0.1% 2,298 0.9% 940 0.4% 2,337 0.9% 774 0.3% 223,264 2.3% -20,208 -0.2% 0 0.0% 0 0.0%
Saaze\BuildCommand::compRbase 525 0.3% 2,287 0.9% 1,560 0.6% 2,364 0.9% 1,273 0.5% 25,536 0.3% -72,008 -0.7% 0 0.0% 0 0.0%
is_dir 2,218 1.1% 2,173 0.8% 2,173 0.8% 2,485 0.9% 2,485 0.9% 6,128 0.1% 6,128 0.1% 0 0.0% 0 0.0%
Saaze\BuildCommand::save_cat_and_tag 1 0.0% 1,939 0.7% 459 0.2% 1,937 0.7% 365 0.1% 4,072 0.0% -773,864 -8.0% 174,920 1.8% 128 0.0%
strlen 9,506 4.9% 1,810 0.7% 1,810 0.7% 2,727 1.0% 2,727 1.0% 7,088 0.1% 7,088 0.1% 232 0.0% 232 0.0%
substr_replace 1,204 0.6% 1,804 0.7% 1,804 0.7% 1,921 0.7% 1,921 0.7% 13,529,136 139.9% 13,529,136 139.9% 90,080 0.9% 90,080 0.9%
mkdir 508 0.3% 1,746 0.7% 1,746 0.7% 1,826 0.7% 1,826 0.7% 1,088 0.0% 1,088 0.0% 0 0.0% 0 0.0%
strtotime 1,022 0.5% 1,341 0.5% 1,341 0.5% 1,482 0.6% 1,482 0.6% 1,864 0.0% 1,864 0.0% 0 0.0% 0 0.0%
date 1,549 0.8% 1,152 0.4% 1,152 0.4% 1,313 0.5% 1,313 0.5% 397,648 4.1% 397,648 4.1% 504 0.0% 504 0.0%
array_key_exists 4,747 2.5% 1,120 0.4% 1,120 0.4% 1,539 0.6% 1,539 0.6% 2,776 0.0% 2,776 0.0% 0 0.0% 0 0.0%
Saaze\MarkdownContentParser::mermaid 424 0.2% 1,118 0.4% 300 0.1% 1,180 0.4% 304 0.1% 36,752 0.4% -3,832 -0.0% 18,960 0.2% 712 0.0%
Saaze\TemplateManager::templateExists 528 0.3% 1,107 0.4% 520 0.2% 1,176 0.4% 529 0.2% 2,368 0.0% 1,816 0.0% 0 0.0% 0 0.0%
Saaze\Collection::Saaze\{closure} 4,096 2.1% 1,062 0.4% 1,062 0.4% 1,396 0.5% 1,396 0.5% 552 0.0% 552 0.0% 0 0.0% 0 0.0%
is_link 1,061 0.5% 1,008 0.4% 1,008 0.4% 1,140 0.4% 1,140 0.4% 2,144 0.0% 2,144 0.0% 0 0.0% 0 0.0%
rmdir 534 0.3% 997 0.4% 997 0.4% 1,076 0.4% 1,076 0.4% 2,144 0.0% 2,144 0.0% 0 0.0% 0 0.0%
Saaze\MarkdownContentParser::moreTag 495 0.3% 966 0.4% 334 0.1% 1,039 0.4% 340 0.1% 569,008 5.9% 648 0.0% 0 0.0% 0 0.0%
json_decode 1 0.0% 958 0.4% 958 0.4% 959 0.4% 959 0.4% 1,353,568 14.0% 1,353,568 14.0% 1,345,496 13.7% 1,345,496 13.7%
urlencode 3,340 1.7% 867 0.3% 867 0.3% 1,194 0.4% 1,194 0.4% 150,352 1.6% 150,352 1.6% 0 0.0% 0 0.0%
Saaze\MarkdownContentParser::displayMath 143 0.1% 821 0.3% 515 0.2% 846 0.3% 417 0.2% 107,592 1.1% -943,480 -9.8% 89,832 0.9% 29,872 0.3%
Saaze\MarkdownContentParser::vimeo 324 0.2% 808 0.3% 235 0.1% 861 0.3% 245 0.1% 2,368 0.0% 568 0.0% 64 0.0% 64 0.0%
Saaze\MarkdownContentParser::codepen 261 0.1% 712 0.3% 214 0.1% 745 0.3% 207 0.1% 7,288 0.1% 216 0.0% 0 0.0% 0 0.0%
Saaze\MarkdownContentParser::twitter 270 0.1% 678 0.3% 189 0.1% 727 0.3% 209 0.1% 3,280 0.0% 472 0.0% 256 0.0% 0 0.0%
Saaze\MarkdownContentParser::markmap 256 0.1% 642 0.2% 152 0.1% 685 0.3% 192 0.1% 2,256 0.0% 568 0.0% 0 0.0% 0 0.0%
FFI::string 495 0.3% 621 0.2% 621 0.2% 713 0.3% 713 0.3% 2,779,096 28.7% 2,779,096 28.7% 327,336 3.3% 327,336 3.3%
file_exists 528 0.3% 587 0.2% 587 0.2% 647 0.2% 647 0.2% 552 0.0% 552 0.0% 0 0.0% 0 0.0%
sort 1,187 0.6% 586 0.2% 586 0.2% 683 0.3% 683 0.3% 224,904 2.3% 224,904 2.3% 0 0.0% 0 0.0%
microtime 2,479 1.3% 568 0.2% 568 0.2% 875 0.3% 875 0.3% 1,640 0.0% 1,640 0.0% 0 0.0% 0 0.0%
preg_match 1,430 0.7% 549 0.2% 549 0.2% 671 0.3% 671 0.3% 1,288 0.0% 1,288 0.0% 0 0.0% 0 0.0%
getenv 1,543 0.8% 537 0.2% 537 0.2% 730 0.3% 730 0.3% 1,104 0.0% 1,104 0.0% 0 0.0% 0 0.0%
json_encode 1 0.0% 518 0.2% 518 0.2% 515 0.2% 515 0.2% 504,360 5.2% 504,360 5.2% 174,224 1.8% 174,224 1.8%
Saaze\MarkdownContentParser::tiktok 269 0.1% 504 0.2% 328 0.1% 534 0.2% 263 0.1% 4,368 0.0% 808 0.0% 2,288 0.0% 576 0.0%
load::blog/index.php 22 0.0% 443 0.2% 443 0.2% 450 0.2% 450 0.2% 113,032 1.2% 113,032 1.2% 24,896 0.3% 24,896 0.3%
strrpos 1,874 1.0% 423 0.2% 423 0.2% 609 0.2% 609 0.2% 1,632 0.0% 1,632 0.0% 0 0.0% 0 0.0%
ltrim 2,025 1.0% 418 0.2% 418 0.2% 645 0.2% 645 0.2% 136,912 1.4% 136,912 1.4% 0 0.0% 0 0.0%
implode 1,487 0.8% 384 0.1% 384 0.1% 518 0.2% 518 0.2% 453,936 4.7% 453,936 4.7% 66,352 0.7% 66,352 0.7%
rtrim 1,543 0.8% 366 0.1% 366 0.1% 458 0.2% 458 0.2% 23,672 0.2% 23,672 0.2% 0 0.0% 0 0.0%
load::saaze/MarkdownContentParser.php 1 0.0% 317 0.1% 317 0.1% 318 0.1% 318 0.1% 161,720 1.7% 161,720 1.7% 0 0.0% 0 0.0%
Saaze\CollectionArray::getCollections 1 0.0% 288 0.1% 2 0.0% 288 0.1% 3 0.0% 33,472 0.3% 712 0.0% 0 0.0% 0 0.0%
Saaze\MarkdownContentParser::wpvideo 271 0.1% 286 0.1% 149 0.1% 311 0.1% 141 0.1% 3,472 0.0% 648 0.0% 0 0.0% 0 0.0%
ob_start 528 0.3% 270 0.1% 270 0.1% 347 0.1% 347 0.1% 8,720,120 90.2% 8,720,120 90.2% 0 0.0% 0 0.0%
Saaze\CollectionArray::loadCollections 1 0.0% 259 0.1% 27 0.0% 259 0.1% 23 0.0% 29,680 0.3% -7,056 -0.1% 0 0.0% 0 0.0%
array_push 1,061 0.5% 236 0.1% 236 0.1% 360 0.1% 360 0.1% 17,672 0.2% 17,672 0.2% 0 0.0% 0 0.0%
ksort 4 0.0% 233 0.1% 233 0.1% 234 0.1% 234 0.1% 85,664 0.9% 85,664 0.9% 0 0.0% 0 0.0%
load::templates/index.php 8 0.0% 212 0.1% 212 0.1% 215 0.1% 215 0.1% 42,296 0.4% 42,296 0.4% 0 0.0% 0 0.0%
ctype_space 992 0.5% 202 0.1% 202 0.1% 298 0.1% 298 0.1% 536 0.0% 536 0.0% 0 0.0% 0 0.0%

1. array_key_exists(). Looking at the above number of calls to the PHP function array_key_exists(), one could assume that replacing this function call with something else might be a good idea. Therefore I benchmarked the original against a possible alternative.

$ time php -r '$v=Array(); $a=microtime(true); for($i=0;$i<900000;++$i) if (array_key_exists($i,$v)) echo $a; printf("%f\n",microtime(true)-$a);'
0.011149
        real 0.04s
        user 0.02s
        sys 0
        swapped 0
        total space 0

Avoiding the call to array_key_exists() by using PHP's null coalescing operator ?? is in no way faster:

$ time php -r '$v=Array(); $a=microtime(true); for($i=0;$i<900000;++$i) if ($v[$i] ?? false) echo $a; printf("%f\n",microtime(true)-$a);'
0.027347
        real 0.06s
        user 0.05s
        sys 0
        swapped 0
        total space 0
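
Note in passing that the two checks are not fully equivalent anyway, as the following minimal demonstration shows: a key holding null counts as present for array_key_exists(), but is treated as absent by ??.

<?php
$v = ['a' => null];
var_dump(array_key_exists('a', $v));   // bool(true)
var_dump($v['a'] ?? false);            // bool(false) -- null is coalesced away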

So in this case XHProf overestimates the impact of plain PHP function calls: the per-call instrumentation overhead is of the same order of magnitude as the cost of a cheap builtin, so such functions appear relatively more expensive than they really are. This was also noted in Profiling Overhead and PHP 7.

Function Name Calls Calls% Incl. Wall Time (µs) IWall% Incl. CPU (µs) ICpu% Incl. MemUse (bytes) IMemUse% Incl. PeakMemUse (bytes) IPeakMemUse%

Current function:
array_key_exists 4,747 99.7% 1,120 0.4% 1,539 0.6% 2,776 0.0% 0 0.0%
Exclusive metrics for current function: 1,120 100.0% 1,539 100.0% 2,776 100.0% 0 N/A

Parent functions:
Saaze\BuildCommand::build_cat_and_tag 3,723 78.4% 887 79.2% 1,210 78.6% 568 20.5% 0 N/A
Saaze\Entry::getContentAndExcerpt 495 10.4% 123 11.0% 167 10.9% 552 19.9% 0 N/A
Saaze\Entry::getUrl 495 10.4% 96 8.6% 151 9.8% 536 19.3% 0 N/A
Saaze\BuildCommand::buildCollectionIndex 32 0.7% 14 1.2% 11 0.7% 568 20.5% 0 N/A
Saaze\BuildCommand::save_cat_and_tag 2 0.0% 0 0.0% 0 0.0% 552 19.9% 0 N/A

The table above clearly shows that the computation of categories and tags is the culprit for the high number of calls to array_key_exists(). This makes sense, as every blog post must be checked for its categories and tags.
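
The pattern behind this is easy to see in a minimal sketch; the code below is illustrative only, not the actual Saaze implementation:

<?php
// Hypothetical tally of categories over all entries: one
// array_key_exists() call per (post, category) pair quickly adds up.
$entries = [
    ['categories' => ['php', 'profiling']],
    ['categories' => ['php']],
];  // in reality roughly 500 blog posts
$catCount = [];
foreach ($entries as $entry) {
    foreach ($entry['categories'] as $cat) {
        if (!array_key_exists($cat, $catCount)) $catCount[$cat] = 0;
        $catCount[$cat] += 1;
    }
}
print_r($catCount);   // Array ( [php] => 2 [profiling] => 1 )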

2. str_word_count(). This function is needed to compute the number of words and thereby the reading time. We do not need to worry about the additional runtime required to compute these two values.
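
A minimal sketch of such a computation; the assumed reading speed of 200 words per minute is my own choice here, not necessarily the value Simplified Saaze uses.

<?php
// Word count on the tag-stripped HTML; 200 words/min is an assumption.
$text    = strip_tags('<p>Lorem ipsum dolor sit amet, consectetur.</p>');
$words   = str_word_count($text);
$minutes = max(1, (int)ceil($words / 200));
printf("%d words, roughly %d min read\n", $words, $minutes);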

3. youtube(). Initially the number of calls of this function looked quite high: the table above shows 491 calls. Similarly, the number of calls of vimeo() is astonishingly high. These numbers can be explained by the fact that both routines are called once for each part of the Markdown file, where the parts are separated by single or triple backticks.

Counting the occurrences of the [youtube] tag:

cd .../content
rg --count-matches '\[youtube' | perl -ane 'printf("%5d %s",$c+=$1,$_) if /:(\d+)$/'
    1 blog/2015/05-27-commuting-to-work-with-an-e-bike.md:1
    3 blog/2015/08-16-urban-planning.md:2
    5 blog/2022/04-26-various-quotes-from-kristian-koehntopp.md:2
    6 blog/2022/05-26-upgrading-oneplus-five-to-oppo-reno4.md:1
    . . .
  426 music/2021/05-18-music-from-joachim-raff.md:7
  429 music/2021/08-08-music-from-rodrigo-riera.md:3
  442 music/2021/01-23-music-from-max-bruch.md:13

So apparently we have 442 actual YouTube videos embedded in this blog. The remaining calls are due to the splitting at backticks: the youtube() function is called not once per YouTube video, but once per part of the file separated by backticks!
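
This splitting behavior can be illustrated with a minimal sketch; this is not the actual Saaze parser, merely a demonstration of the principle:

<?php
// Split the Markdown source at code spans/fences so that tag handlers
// such as youtube() run once per non-code part and never touch code.
$md = "See [youtube] abc\n```\na [youtube] inside a fence stays as-is\n```\ndone";
$parts = preg_split('/(```.*?```|`[^`]*`)/s', $md, -1, PREG_SPLIT_DELIM_CAPTURE);
foreach ($parts as $i => $part) {
    if ($i % 2 === 1) continue;          // odd indices are code, left untouched
    $parts[$i] = str_replace('[youtube]', '<iframe>...</iframe>', $part);
}
echo implode('', $parts), "\n";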

]]>
https://eklausmeier.goip.de/blog/2023/07-05-hosting-static-content-with-neocities https://eklausmeier.goip.de/blog/2023/07-05-hosting-static-content-with-neocities Hosting Static Content with Neocities Wed, 05 Jul 2023 16:30:00 +0200 I wrote about hosting static sites on various platforms:

  1. Hosting Static Content with surge.sh
  2. Hosting Static Content with now.sh; now.sh renamed itself to vercel.app
  3. Hosting Static Content with netlify.app
  4. Hosting Static Content with Cloudflare

This short post documents how to upload static content to Neocities. A Wikipedia article on Neocities is here. Neocities currently hosts more than 600,000 websites, as of July 2023. There are two plans on Neocities: a free one with 1 GB of storage and 200 GB of bandwidth, and a paid one with 50 GB of storage and 3,000 GB of bandwidth, see supporter.

For the installation process see The Neocities CLI. Certain file types are not allowed on Neocities, see Currently Allowed File Types. For example, you cannot upload C source files or MP4 files; see the log output below for an example.

1. Ruby. First install Ruby, if you haven't done so before.

pacman -S ruby

This will install roughly 25 MB including a number of dependencies.

Then install the neocities Ruby script via

gem install neocities

This will install the Ruby neocities script in ~/.local/share/gem/ruby/3.0.0/bin. Make sure this directory is in your PATH.

The neocities command line provides the subcommands below:

  |\---/|
  | ~_O |   Neocities
   \_o_/

  Subcommands:
    push        Recursively upload a local directory to your site
    upload      Upload individual files to your Neocities site
    delete      Delete files from your Neocities site
    list        List files from your Neocities site
    info        Information and stats for your site
    logout      Remove the site api key from the config
    version     Unceremoniously display version and self destruct
    pizza       Order a free pizza

2. Login. The first time you access Neocities you are asked for your sitename and password. The API key for your site is then stored in ~/.config/neocities/config. For example, the first time you issue

neocities list /

you will be prompted for your credentials.

3. Upload. Assume all your files, including images, PDFs, JavaScript, and CSS, are located in /tmp/build; then run

$ cd /tmp/build
$ time neocities push .
 . . .
Uploading pdf/mb3_d6-4-report-on-application-tuning-and-optimization-on-arm-platform.pdf ... SUCCESS
Uploading pdf/md4c.c ...
ERROR: pdf/md4c.c is not a valid file type (or contains not allowed content) for this site, files have not been uploaded (invalid_file_type)
Uploading pdf/meijaard2007.pdf ... SUCCESS
Uploading pdf/ms-oxoab.pdf ... SUCCESS
Uploading pdf/peerj-preprints-826.pdf ... SUCCESS
Uploading pdf/sc11-unrolling-parallel-loops.pdf ... SUCCESS
Uploading pdf/shuttle_primary_computer_system.pdf ... SUCCESS
Uploading sitemap.html ... SUCCESS
Uploading sitemap.xml ... SUCCESS
        real 1184.54s
        user 4.18s
        sys 0
        swapped 0
        total space 0

So uploading my entire website, excluding content from Koehntopp, Dr. Vonhoff, Paternoster, Mobility, and Dr.-Ing. Humpich, took almost 20 minutes. There is no parallelism in uploading the data. It takes almost 3 minutes just to check whether all files have been transferred, without actually transferring anything.

Uploading a single file goes like this:

$ neocities upload index.html
Uploading index.html to /index.html ...
SUCCESS: your file(s) have been successfully uploaded

It seems you cannot upload symbolic links. My blog uses a couple of symbolic links, some for correcting misspellings, some for providing older content which has moved in the meantime.

4. Statistics. You can inquire usage statistics with the info subcommand.

$ neocities info eklausmeier
sitename         eklausmeier
views            408
hits             485
created_at       2019-09-28 11:25:17 +0200
last_updated     2019-09-28 19:02:34 +0200
domain
tags             ["computer", "math", "programming"]
latest_ipfs_hash

My Neocities account was created in September 2019, and since then it has been viewed roughly 400 times. That is no wonder, as it contained only a single index.html, which was uninteresting.

Stats on a Neocities account can be viewed at eklausmeier; anyone can see these stats. There is no further information on what these stats include, so I assume that they also count access from bots. As we know from Filtering Bots and Crawlers from Access.log, 90% of the accesses are bots.

]]>