Jekyll2022-06-26T19:00:03+02:00https://schumacher.sh/feed.xmlIT’S A UNIX SYSTEMMy personal blog, mostly tech stuff.
Jan SchumacherGetting rid of Riester, DVAG, Generali2021-12-02T13:58:29+01:002021-12-02T13:58:29+01:00https://schumacher.sh/2021/12/02/getting-rid-of-riester-dvag-generali<ul>
<li><a href="#tldr">TL;DR</a></li>
<li><a href="#kontext">Context</a></li>
<li><a href="#ausgangssituation">Starting Point</a></li>
<li><a href="#riester">Riester</a></li>
<li><a href="#wunschpolice">Wunschpolice</a></li>
<li><a href="#versicherungspaket-young--home">Versicherungspaket Young & Home</a></li>
<li><a href="#alternativen">Alternatives</a></li>
<li><a href="#ausblick">Outlook</a></li>
</ul>
<p><em>Disclaimer - this post is about my experience with German private</em> Altersvorsorge <em>insurance policies starting out in 2005 (when I was 16), the realization that these policies are hostile and opaque by design, and legal ways to get at least most of my money back. This ties into a larger context of me starting to actively take charge of my finances. I’d like to share a few thoughts on a very enlightening journey. Sadly, financial education is not a priority in Germany, which helped give rise to these practices in the first place - but I believe it’s slowly changing for the better.</em></p>
<h2 id="tldr"><a href="#tldr">TL;DR</a></h2>
<p>Signed several contracts with a DVAG advisor at age 16 in 2005, then canceled / revoked them in 2021.</p>
<ul>
<li>Riester pension (Generali) - never claimed any tax benefits, no child allowance; after cancellation, got back about 84% of the money paid in. Very high administration & sales costs, poor fund performance.</li>
<li>“Wunschpolice”, a unit-linked pension insurance with supplementary occupational disability coverage (Generali). Revocation successful thanks to advice from <a href="https://mayerlaw.de">Kanzlei Mayer & Mayer</a>; got back about 77% of the payments. Good fund performance, but low profit participation.</li>
<li>Since 2008 - an insurance bundle covering liability / household contents / glass / accident (Generali), by now about €35 per month. Glass and accident coverage are useless for me; canceled as well, but unfortunately with a one-year notice period.</li>
<li>In between, from about 2010 to 2013, a Badenia building savings contract (Bausparvertrag), first suspended, then canceled. Very high administration costs.</li>
</ul>
<p>Alternatives for long-term investments? For example, monthly payments into accumulating ETFs (80% share), long-term holding of tech stocks (20% share, e.g. Cloudflare, TSMC, ASML etc.), possibly an additional ~10% share in crypto for the risk-tolerant.</p>
<p>Further recommendations: the <a href="https://hartmutwalz.de/finanzblog/">newsletter of Prof. Hartmut Walz’s finance blog</a>, and doing the annual tax return with <a href="https://taxfix.de">Taxfix</a>.</p>
<h2 id="kontext"><a href="#kontext">Context</a></h2>
<p>I was born in 1989 in the former GDR, went through a normal Realschule education, and started an IHK apprenticeship as an IT specialist for system integration (FiSi) in fall 2005 after graduating. My parents, who at that point had already signed several contracts with a DVAG financial advisor (Riester plus various insurance policies), wanted to start working on my retirement provisions early, so in November 2005 I found myself, at 16, sitting across from the DVAG advisor. The advisor had a charming, charismatic, respectful - almost fatherly - manner and treated me as an equal adult from the start. He explained the advantages of the various Allianz insurance bundles (later Generali), and how, right at the start of my career, I had the chance to lay the foundation for a successful retirement beginning in the 2050s. Sounds like a no-brainer, right? Definitely for a 16-year-old just entering the workforce - especially since my parents had been using the same products for years. I even had the advantage of starting very young! Poor parents! Both were, and are, employed full-time, just like me. But beware, spoiler - my parents canceled their Riester contracts years ago, which came up in passing when I shared my recent findings in 2021.</p>
<p>I don’t blame my parents in the slightest. Rather, in my opinion, situations like these are the result of the systematically poor financial literacy mentioned at the start, combined with the still widespread attitude of treating money as a taboo subject. Add to that an often absolute trust in private insurers and their agents when it comes to the complex world of finance, with customers imposing on themselves an almost doctor-patient-like duty of confidentiality - not least because of the advisors’ insistent emphasis on the confidentiality and importance of these consultations. In my view, this culture of opacity and the outsourcing of basic financial knowledge massively shifts the balance of power in personal conversations in favor of the advisor, who - thanks to the commissions paid by the insurers - has a strong interest in selling as many products as possible.</p>
<p>Disclaimer - I’m still a layman when it comes to finance, but I spent a number of evenings over the past year working through my contracts, their specific terms and benefits, and researching exits and alternatives. This post describes my personal situation, which will certainly feel familiar to some readers in parts, but is unique at the same time.</p>
<h2 id="ausgangssituation"><a href="#ausgangssituation">Starting Point</a></h2>
<p>So at the end of 2005, at 16 - as a minor, together with my mother - I signed several retirement contracts at once. A classic unit-linked Riester pension with premium dynamics, meaning the monthly premiums increase over the years, plus a “Wunschpolice” - occupational disability coverage with an additional pension insurance, also dynamic. Around 2008, another combined policy covering liability / accident / household contents / glass was added. These were, and are, all Allianz or Generali policies. In 2010, the same advisor also “brokered” me an additional Badenia building savings contract, which I canceled again after just a few years, since I had suspended all payments from 2011 to 2013 because of school and university. Even then it was clear that the small contributions I had paid in (about €250 in total) would have been almost entirely eaten up by the contract’s “administration fees”, even while payments were paused. Red flag? You bet.</p>
<p>The Generali contracts then continued, with the corresponding premium increases, from my return to work in 2013 until 2021. Since I was employed again from 2013 on, the advisor’s calls and contact attempts picked up. By then I no longer lived in my old hometown, and with a bit of distance came the insight that I had no interest whatsoever in buying further financial products. I told the advisor so around the end of 2013, and there have been no further contact attempts from him since - apart from the annual, ritual birthday card.</p>
<p>Unfortunately, it took almost another eight years before I took a critical look at the existing contracts. By that point I had paid in over €20,000. Financial literacy, indeed. The trigger was a remote workshop on “Personal Finance” that my partner attended in early 2021. I was listening in passing when Riester, DVAG, Generali etc. came up, and how the associated products ultimately undermine any financial self-determination while the industry behind them earns handsomely. The beginning of the end of my insurance policies.</p>
<h2 id="riester"><a href="#riester">Riester</a></h2>
<p>As mentioned, the Riester contract was unit-linked, meaning the insurer invests the premiums in an equity fund it manages itself. Incidentally - a Riester pension is inheritable in the case of early death (before retirement), but only after deduction of the state subsidies and tax benefits (tax returns, child allowance etc.). Furthermore, the subsidies would also be lost if I died shortly after retiring. A survivor’s pension would then still be paid out from the remaining money, but <em>only</em> to a spouse or children. What, not married and no children of your own? Too bad! Money gone. My contract assumed a life expectancy of 91 years, so dying earlier would also mean financial disadvantages relative to the invested sum.</p>
<p>At the time the contract was signed, an annual fund performance of 8% was assumed. A hefty return, which sounds great at first. These figures only serve to “illustrate a possible fund performance, without guarantee” - but the stock markets did have a pretty good bull run over the past few years, despite the Corona dip in early 2020. So how did the fund actually perform?</p>
<p><a href="https://www.dws.de/garantiefonds/lu0275643301-dws-funds-invest-vermoegensstrategie/#Performance">DWS Funds Invest Vermögensstrategie</a></p>
<p>€137.60 in April 2015, then €139.55 at the beginning of 2021 - a gain of less than one percent over 6.5 years. Hmm. Compare that with a popular ETF over the same period:</p>
<p><a href="https://www.ishares.com/de/privatanleger/de/produkte/251882/ishares-msci-world-ucits-etf-acc-fund#performance">iShares Core MSCI World UCITS ETF</a></p>
<p>Plus 133.5% between April 2015 and February 2021. A gain more than a hundred times that of the Riester fund, and that’s just one example. Many ETFs and other fund packages saw similar performance. Of course, a conservatively invested Riester fund is not an ETF, but the question remains why there was practically no growth here while most of the stock market boomed. To be fair, the fund did gain about 15% between February and November 2021 - quite similar to the ETF mentioned above. Then again, we live in times when more capital has flowed into stock markets than ever before.</p>
<p>Further findings after reviewing the annual statements (early 2006 to early 2021):</p>
<ul>
<li>about €9,000 in premiums paid in</li>
<li>PLUS about €2,100 in state subsidies</li>
<li>MINUS about €1,500 in “acquisition and sales costs”</li>
<li>MINUS about €1,250 in “administration costs”</li>
<li>PLUS about €2,000 in fund growth (relative to the premiums paid in plus subsidies = €11,100)</li>
</ul>
<p>That works out to a total balance of about €10,300 at the beginning of 2021, after 15 years - even less than the sum of my payments and the subsidies. What a flop. After brief consideration and research, I decided to cancel the contract immediately, which is possible with one month’s notice. Another option would have been a “suspension” - stopping payments and waiting until retirement age, with “administration costs” etc. still being due. It would actually have been interesting to watch whether the DWS fund’s performance could keep up over the decades and the money would actually grow.</p>
<p>Canceling a Riester contract before retirement age is considered “harmful use” (schädliche Verwendung), meaning all subsidies and tax benefits must be paid back. Since I had never claimed any further allowances or tax benefits, the (approximate) payout relative to the surrender value was no surprise - about €8,600 in May 2021. The DWS fund had even performed slightly better towards the end. Thanks, DWS! I don’t want to imagine what would have happened if I had been paying into ETFs for the past 10 years (spoiler - I have been, since spring 2021).</p>
<h2 id="wunschpolice"><a href="#wunschpolice">Wunschpolice</a></h2>
<p>This is the “unit-linked pension insurance with supplementary risk coverage” for occupational disability mentioned at the beginning. What a beautiful phrase. It ran in parallel to the Riester policy since early 2006. Here’s the interesting part: thanks to a ruling by the German Federal Court of Justice, this type of pension insurance can be revoked in full, regardless of how long the contract has been running. The ruling concerns faulty revocation instructions in the original contracts. This should, however, be reviewed by a lawyer.</p>
<p>But first things first.</p>
<p>While the Riester contract secured the insurer’s (and the advisor’s) income primarily through acquisition and administration costs, the pension insurance paints a different picture. Here, too, the money is invested in a self-managed fund, but there is a “profit participation” (Überschussbeteiligung) applied to the fund’s gains. Which fund?</p>
<p><a href="https://www.dws.de/aktienfonds/de0008476524-dws-vermoegensbildungsfonds-i-ld/">DWS Vermögensbildungsfonds I LD</a></p>
<p>A performance of over 200% across the last 10 years! Nice! Thanks to the premium dynamics, I paid in just over €11,000 over 15 years! Wow, the last statement from Generali (spring 2021) must look really good:</p>
<p>Total value of the fund balance: €5,678.11</p>
<p>Just wonderful. Granted, half of the contract is a risk-based occupational disability insurance, which accordingly ate up half of the premiums. Further down in the annual statement, a payout amount of €9,127.60 is mentioned in case of cancellation, with capital gains tax and solidarity surcharge being due on it. One sentence later, it says that “the payout amount is no longer accurate, since the contract value has developed in the meantime”. Confusing, admittedly. Online forums also suggest that a cancellation usually pays out less than a successful revocation.</p>
<p>Accordingly, I turned to <a href="https://www.mayerlaw.de/">Kanzlei Mayer & Mayer</a>, a law firm specializing in exactly these cases, to evaluate a possible revocation. The contact via email was very pleasant and uncomplicated. For a flat fee of €60, the firm provides a qualified assessment of the chances of a successful revocation; all relevant contract documents (in my case, by now over 150 pages) must be provided to the lawyer. I can recommend Microsoft Lens as a smartphone scanning app.</p>
<p>In my case, the revocation was fundamentally possible, but according to the lawyer not without risk, for several reasons. In the end I decided to go ahead, and sent the following document, as recommended by the lawyer, to the insurer:</p>
<blockquote>
<p><em>Dear Sir or Madam,</em></p>
<p><em>I hereby object to the formation of contract no. XXXXX and request that you pay out the premiums I have paid, plus the benefits actually derived from them, minus an appropriate compensation for the abstract insurance coverage enjoyed, to my account stated below by XX.XX.2021 at the latest, and that within this period you provide information on the total premiums paid, the acquisition costs, administration costs and risk costs of the contract, as well as the current fund balance.
I point out that my objection is not to be construed as a cancellation of the contract.</em></p>
<p><em>[…]</em></p>
<p><em>Any further premium payments are made under the express reservation of reclaim.</em></p>
<p><em>Kind regards</em></p>
<p><em>[…]</em></p>
</blockquote>
<p>About three weeks later came the insurer’s reply: the objection was accepted. In total, I had paid in €11,082.75. Of that, €5,696.34 was deducted as premiums for the supplementary risk insurance. The remaining €5,386.41 were offset against fund gains of €2,753.80, resulting in a refund of €8,140.21. Bitter, but probably the best possible outcome in this case.</p>
<p>Ultimately, from personal experience, I would almost always recommend attempting the revocation - after legal review. As far as I understand it, the only risk is that the insurer rejects the revocation. In that case, one could still cancel with proper notice, or take the matter to court.</p>
<h2 id="versicherungspaket-young--home"><a href="#versicherungspaket-young--home">Versicherungspaket Young & Home</a></h2>
<p>The “Young & Home” insurance bundle, including liability, household contents, accident and glass coverage, had been running since 2008 and was repeatedly adjusted by the DVAG advisor until 2013, since I moved several times. From 2013 on, the contract had a six-year (!) term, until 2019. It has meanwhile been canceled as well, but due to the automatic 12-month renewals, it remains active until late summer 2022.</p>
<p>I won’t go into the contract details, but the costs are now at almost €36 per month. The accident and glass coverage are fairly useless for me, since my job involves no manual labor and I consider the probability of a suddenly broken window to be very low.</p>
<p>For comparison - an equivalent combination of household contents and liability insurance with good terms can cost €60 to €90 per year.</p>
<h2 id="alternativen"><a href="#alternativen">Alternatives</a></h2>
<p>As described at the beginning, the money from the dissolved Generali contracts is now invested in ETFs (e.g. the accumulating A2PKXG) and a handful of individual stocks. After an initial lump-sum payment, the ETF runs as a monthly savings plan at ING. Advantages include the cost-average effect and ING’s currently very low or waived fees when buying ETFs. So far the investments have developed quite well, with the stock market - at least as of November 2021 - running from one all-time high to the next. Since I plan to invest over several decades, I don’t worry much about short-term price movements. The big advantage - I can decide about my own money at any time.</p>
<p>I also recommend doing the annual tax returns with Taxfix. Thanks to the very good smartphone app, my time investment is limited to about 1-3 hours. Even when filing isn’t mandatory, it’s almost always worth it, especially in times of working from home. Personal anecdote - after I convinced someone to switch from a tax advisor’s office to Taxfix, they received a refund of several thousand euros for the first time, instead of annual back payments.</p>
<h2 id="ausblick"><a href="#ausblick">Outlook</a></h2>
<p>I hope some of these points are relevant to people who, like me, are developing an interest in financial literacy and autonomy. Many, many thanks to <a href="https://hartmutwalz.de/">Prof. Hartmut Walz</a>, <a href="https://www.mayerlaw.de/">Kanzlei Mayer & Mayer in Freiburg</a>, and the <a href="https://taxfix.de">Berlin startup Taxfix</a>.</p>Jan SchumacherTL;DR Kontext Ausgangssituation Riester Wunschpolice Versicherungspaket Young & Home Alternativen AusblickRunning a private mail server for six years, easy peasy2021-05-10T18:00:38+02:002021-05-10T18:00:38+02:00https://schumacher.sh/2021/05/10/running-a-private-mail-server-for-six-years-easy-peasy<ul>
<li><a href="#motivation">Motivation</a></li>
<li><a href="#the-right-reasons">The Right Reasons</a></li>
<li><a href="#tech-stack">Tech Stack</a></li>
<li><a href="#implementation">Implementation</a></li>
<li><a href="#mail-rate-and-spaminess">Mail rate and Spaminess</a></li>
<li><a href="#ongoing-maintenance">Ongoing Maintenance</a></li>
<li><a href="#alternatives">Alternatives</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<p>TL;DR - High-level overview of running my own small, private Linux mail server since late 2015. I’ve encountered surprisingly few issues and gained many valuable lessons. The initial setup (including monitoring, backups, configuration management) took some time, but recurring maintenance since then has been an estimated 10 - 20 minutes per month. Worth it for me, but probably not for most people. The next best thing, in my opinion, is mailbox.org with one’s own domain.</p>
<p><a href="https://news.ycombinator.com/item?id=30428882">Discussion on Hackernews</a></p>
<h2 id="motivation"><a href="#motivation">Motivation</a></h2>
<p>The main reason for this writeup is to respond to the sentiment I keep reading in IT forums: that it’s “very time demanding”, “impossible to maintain”, “a pain to make sure your mails are being delivered”. I understand the reasons, and tend to agree when it comes to large-ish selfhosted mail deployments with hundreds of users and tens of thousands of mails per day, which also happens to be part of my current day job. It’s true that many IT people understandably don’t want to invest private time into something that feels like just another work assignment. But personally, it fills me with satisfaction to self-host my own infrastructure, my little internet island where I’m root, especially in times of mega corporations trying (and succeeding) to redefine “the internet” as a portfolio of services only they can offer, with little alternative.</p>
<h2 id="the-right-reasons"><a href="#the-right-reasons">The Right Reasons</a></h2>
<p>As mentioned above, I’m working in Linux administration / engineering and know my way around technical aspects of mail systems. I also love to self-host stuff, and it motivates me to approach all kinds of challenges when it comes to making things work. As one of the positive side effects, I’m often able to apply the experience I’ve gained privately during my career.</p>
<p>There are a few things one should consider before diving into selfhosting mail systems for production use. There’s a nice overview of <a href="https://bridge.grumpy-troll.org/2020/07/small-mailserver-bcp/">best practices</a> by Phil Pennock which I mostly agree with. A few more points from my own experience:</p>
<ul>
<li>Knowing that mail in itself is a painfully outdated protocol from the early days of the internet.</li>
<li>Knowing what an open relay is, and how to avoid it.</li>
<li>Familiarity with DNS, TTLs, records, zone files and specifically SPF, DKIM, DMARC.</li>
<li>Hosting should only happen in data centers, with dedicated, public IP addresses. Most residential IP space is likely blacklisted by default in many spam filters.</li>
<li>Knowing how to read mail headers.</li>
<li>Knowing about <a href="https://www.mail-tester.com/">mail-tester.com</a>, how it’s an awesome tool to debug one’s sending capabilities (and how there’s only a few free tries at a time).</li>
<li>Monitoring and backups should be in place, or should at least be considered. More below.</li>
<li>Researching the new public IP address with abuse DBs before implementation.</li>
<li>Knowing how to keep all parts of the system up to date.</li>
<li>Knowing that while the whole setup might be under one’s own control, and it’s possible to only allow the most secure TLS ciphers for user logins, mails between servers might still go unencrypted unless it’s specifically enforced, which might lead to deliverability problems if the other side doesn’t have encryption enabled. <a href="http://www.postfix.org/TLS_README.html#server_enable">RFC2487 even states</a> that enforcing encryption “MUST NOT be applied in case of a publicly-referenced [Postfix] SMTP server”. <a href="https://transparencyreport.google.com/safer-email/overview">According to Google</a>, about 90% to 93% of mail is encrypted in transit these days.</li>
<li>Knowing that mail is not a real-time communications medium, despite appearances. If a receiving mail server is down, the sending server might try resending for 24 - 48 hours before issuing a bounce to the original sender address. Having short downtimes is usually not a big problem with mail servers.</li>
<li>Despite doing everything correctly, sent mails might in some cases never arrive, without receiving a bounce message or any other indication something went wrong (looking at you, Microsoft).</li>
<li>Bonus points for having a trusted, technical person knowing about the setup with the ability to access the stuff in case of one’s incapacitation or death.</li>
</ul>
<h2 id="tech-stack"><a href="#tech-stack">Tech Stack</a></h2>
<p>My mail server is hosted inside a KVM / libvirt VM on a dedicated <a href="https://www.hetzner.com/dedicated-rootserver/matrix-ax">Hetzner server</a>, a hosting provider I can recommend 100%. My OS of choice is Debian stable with <a href="/keeping-latest-kernels-in-debian-with-backports-and-puppet">backported kernels</a>. An iptables script takes care of internal forwardings from hypervisor to VM. The mail server itself is Postfix, with Dovecot on top, as well as SpamAssassin, OpenDKIM and MySQL. There’s only one inbox with a catchall rule, where each login or service gets its own mail alias. Everything is monitored by Icinga2 and provisioned using Puppet. Since I’m accessing my mails only through either Thunderbird (desktop) or K-9 Mail (mobile), there’s no web frontend.</p>
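<p>The catchall-plus-aliases idea can be expressed with a plain Postfix virtual alias map. A minimal sketch with placeholder names - my actual setup uses MySQL-backed maps, so the details differ:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># /etc/postfix/main.cf (excerpt)
virtual_alias_domains = example.com
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual - one alias per service, catchall last;
# specific entries take precedence over the @domain catchall
shop@example.com       inbox
newsletter@example.com inbox
@example.com           inbox
</code></pre></div></div>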
<h2 id="implementation"><a href="#implementation">Implementation</a></h2>
<p>While I’m not going into specifics regarding Postfix, Dovecot etc., it’s important to mention a few architectural details. The mail server VM (residing as a qcow2 image file inside an encrypted LV, among others) is backed up twice per week using <a href="/virtual-machine-backup-without-downtime">virsh blockcopy</a> and transferred to another remote server. This setup has proven to be quite portable. I’ve since migrated my system to newer dedicated servers several times by just deploying my basic puppet hypervisor role, executing the iptables script, copying the VM to the new server and updating my DNS records. I also like to test dist-upgrades by spinning up a local copy of the VM.</p>
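<p>For context, the live-copy step of such a backup boils down to something like the following (VM name, disk target and paths are placeholders; the linked post describes the full procedure):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># copy the running VM's disk without downtime; --finish detaches the
# mirror once it is in sync, leaving a consistent point-in-time copy
virsh blockcopy mailvm vda /backup/mailvm-copy.qcow2 --wait --finish

# ship the copy to a remote backup host
rsync -a --sparse /backup/mailvm-copy.qcow2 backup.example.com:/srv/backups/
</code></pre></div></div>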
<p>Monitoring is underrated when it comes to selfhosting. This is something I’ve learned soon after the initial deployment, in 2016, when postfix was down for about 14 hours due to carelessness on my part. I’ve since added Icinga2 to all of my systems for internal checks, as well as adding a secondary remote AWS EC2 Icinga2 instance for monitoring the monitoring server (yo dawg…) as well as various TCP ports from the outside.</p>
<p><img src="/assets/images/icinga2_001.webp" alt="" />
<em>my main Icinga2 instance watching over my mail server</em></p>
<p>Monitoring mails are delivered to an inbox outside of my mail setup. Same for cron mails. I can recommend an Android app called <a href="https://play.google.com/store/apps/details?id=info.degois.damien.android.aNag">aNag</a>, which visualizes Icinga2 state changes through push notifications, but I’m not going so far as to add some kind of oncall alerting. If something’s down, it stays down until I have time to fix it - which, so far, has not been the case with my mail server.</p>
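<p>As an illustration of how little Icinga2 configuration such a check needs, a service definition probing SMTP from the outside can be as small as this (the host name and values are made up for illustration):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// uses the "smtp" CheckCommand shipped with the Icinga Template Library
object Service "smtp" {
  import "generic-service"
  host_name      = "mailserver"
  check_command  = "smtp"
  vars.smtp_port = 25
}
</code></pre></div></div>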
<h2 id="mail-rate-and-spaminess"><a href="#mail-rate-and-spaminess">Mail rate and Spaminess</a></h2>
<p>My low mail throughput is one of the likely reasons my setup has been working well. Even while being subscribed to a bunch of newsletters and services, there’s only about 20 to 40 incoming mails per week. Looking at my sent folder, there’s just about 550 outgoing mails since late 2015.</p>
<p>I’ve had exactly one problem with deliverability during that time, where someone with a Hotmail account complained they never received my mail - even though the Microsoft server claimed to have accepted it according to my logs. While Microsoft can be notoriously opaque and unforgiving with (not) accepting mail, in this case it turned out to be a blacklisting issue. I had just moved servers and IP addresses shortly before, with the new IP having been on an internal MS blacklist. I raised a ticket with their mail infrastructure department, and to my surprise, the IP was cleared soon after.</p>
<p>I rarely ever see any spam. Once every few months I’ll receive a French SEO mail, which is more of a mild curiosity than a bother, and not really worth looking into.</p>
<h2 id="ongoing-maintenance"><a href="#ongoing-maintenance">Ongoing Maintenance</a></h2>
<p>As mentioned before, I nowadays spend maybe 2 to 5 hours per year on maintenance, perhaps a bit more if a Debian dist-upgrade comes along. Every once in a while I’ll grep through my mail logs out of curiosity, but there are rarely any surprises there. I recommend implementing some kind of auto-upgrade mechanism for security updates, as well as subscribing to various mailing lists, such as <a href="https://www.debian.org/security/">Debian Security</a>.</p>
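<p>On Debian, the auto-upgrade part is covered by the unattended-upgrades package; enabling it amounts to two lines of APT configuration:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
</code></pre></div></div>
<p>The allowed package origins (e.g. security updates only) can be tuned in 50unattended-upgrades.</p>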
<h2 id="alternatives"><a href="#alternatives">Alternatives</a></h2>
<p>Writing all this down, it does seem like an insanely inconvenient thing to do, but I’ve invested many hours tuning my setup and it seems rock solid at this point. If I were to give up selfhosting, my first choice would be to migrate my domain to <a href="https://kb.mailbox.org/display/MBOKBEN/Using+e-mail+addresses+of+your+domain">mailbox.org</a>. I consider mailbox.org to be one of the <a href="https://de.wikipedia.org/wiki/Peer_Heinlein">most capable</a> and <a href="https://mailbox.org/en/company">trustworthy</a> mail providers out there. I also recently went through the steps of setting up someone else’s domain with their MX servers, which was very easy.</p>
<h2 id="conclusion"><a href="#conclusion">Conclusion</a></h2>
<p>If you’re like me, an up-and-coming Linux sysadmin or enthusiast, hosting your own mail server can add lots of valuable experience. And for better or worse, there’s no one else to blame if something goes wrong. And soon one thing leads to another, with additional monitoring, config management, blog posts ..</p>
<p>10/10 would selfhost again.</p>Jan SchumacherMotivation The Right Reasons Tech Stack Implementation Mail rate and Spaminess Ongoing Maintenance Alternatives ConclusionUp-to-date filebeat for 32bit Raspbian (armhf)2021-03-14T14:42:23+01:002021-03-14T14:42:23+01:00https://schumacher.sh/2021/03/14/up-to-date-filebeat-for-32bit-raspbian-armhf<p>Fiddling around with ELK recently, I’ve been setting up a log server. Deploying filebeat to my Raspbian (RPi 2, 3, 4, nano) systems turned out somewhat challenging, mostly since elastic doesn’t provide official releases for 32bit ARM. There’s been an <a href="https://github.com/elastic/beats/issues/9442">open ticket</a> since 2018 asking for official ARM builds, and it seems that elastic is now at least providing .deb packages for 64bit ARM.</p>
<p>This got me thinking, what if I just compile a filebeat armhf binary and repackage the given arm64 .deb file? Turns out, it’s quite easy. Here’s my all-in-one script, tested on x64 Debian 10 and Ubuntu 20.10:</p>
<p><a href="https://gist.github.com/lazywebm/63ce309cffe6483bb5fc2d8a9e7cf50b">https://gist.github.com/lazywebm/63ce309cffe6483bb5fc2d8a9e7cf50b</a></p>
<p>The interesting stuff happens in the last four functions. Here’s a rundown:</p>
<ul>
<li>working directory is ~/Downloads/filebeat_armhf</li>
<li>get the latest golang amd64 package for cross-compiling, extract it to the working dir, and specifically use its bundled go binary (ignore any global installations)</li>
<li>get latest filebeat arm64 .deb package</li>
<li>clone the beats repo, check out the latest release branch</li>
<li>build arm (armhf) filebeat binary with new go release</li>
<li>repackage the given arm64 .deb with the new filebeat binary, removing the other binary (filebeat-god, which seems to be irrelevant) and updating the md5sums and control files</li>
<li>working dir cleanup</li>
</ul>
<p>The result of this poor man’s CI (at the time of writing) is a new .deb file, ready to be deployed on Raspbian: ~/Downloads/filebeat_armhf/filebeat-7.11.2-armhf.deb</p>
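<p>The repackaging step is the only non-obvious part. Here’s a self-contained sketch of the idea - it fabricates a dummy arm64 .deb, swaps a file inside it, fixes the Architecture field, refreshes the md5sums and rebuilds. All package names and paths below are placeholders; the real script linked above operates on the actual filebeat .deb layout.</p>

```shell
# Self-contained sketch of the repackage idea: build a dummy arm64 .deb,
# swap the payload file for a new build, fix the Architecture field,
# refresh md5sums, rebuild. Names and paths are placeholders.
WORK=$(mktemp -d)
cd "$WORK"

# fabricate a minimal "upstream" arm64 package
mkdir -p pkg/DEBIAN pkg/usr/share/demo/bin
printf 'Package: demo\nVersion: 1.0\nArchitecture: arm64\nMaintainer: nobody <nobody@example.com>\nDescription: repackage demo\n' > pkg/DEBIAN/control
echo 'old arm64 binary' > pkg/usr/share/demo/bin/demo
( cd pkg && find usr -type f -exec md5sum {} + > DEBIAN/md5sums )
dpkg-deb -b pkg demo-1.0-arm64.deb >/dev/null

# the actual repackage step: unpack, replace the binary, adjust metadata
dpkg-deb -R demo-1.0-arm64.deb repack
echo 'new armhf binary' > repack/usr/share/demo/bin/demo
sed -i 's/^Architecture: arm64/Architecture: armhf/' repack/DEBIAN/control
( cd repack && find usr -type f -exec md5sum {} + > DEBIAN/md5sums )
dpkg-deb -b repack demo-1.0-armhf.deb >/dev/null
dpkg-deb -f demo-1.0-armhf.deb Architecture   # prints: armhf
```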
<p>I have some further automation in place, deploying the new .deb to a publicly available web server. A small puppet module takes it from there:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if $facts['os']['distro']['id'] == 'Raspbian' {
  # 'archive' requires the puppet-archive module
  archive { '/root/filebeat-7.11.2-armhf.deb':
    ensure => 'present',
    source => 'https://example.com/filebeat-7.11.2-armhf.deb';
  }
  # the package name must match the .deb's Package field ('filebeat'),
  # otherwise the dpkg provider reinstalls on every run
  package { 'filebeat':
    ensure   => 'installed',
    provider => 'dpkg',
    source   => '/root/filebeat-7.11.2-armhf.deb',
    require  => Archive['/root/filebeat-7.11.2-armhf.deb'];
  }
  # filebeat config with the pcfens-filebeat module goes here
}
</code></pre></div></div>Jan SchumacherFiddling around with ELK recently, I’ve been setting up a log server. Deploying filebeat to my Raspbian (RPi 2, 3, 4, nano) systems turned out somewhat challenging, mostly since elastic doesn’t provide official releases for 32bit ARM. There’s been an open ticket since 2018 asking for official ARM builds, and it seems that elastic is now at least providing .deb packages for 64bit ARM.Debian, QEMU, libvirt, qcow2 and fstrim2020-11-27T13:41:11+01:002020-11-27T13:41:11+01:00https://schumacher.sh/2020/11/27/debian-qemu-libvirt-qcow2-and-fstrim<p>After some discussion with colleagues on how to best approach fstrim for qcow2 on libvirt in Debian 10, I sat down one Sunday afternoon researching and applying fstrim to my libvirt VMs.</p>
<ul>
<li><a href="#directive">Directive</a></li>
<li><a href="#state-of-things">State of things</a></li>
<li><a href="#research">Research</a></li>
<li><a href="#setup-part-1">Setup Part 1</a></li>
<li><a href="#little-detour">Little detour</a></li>
<li><a href="#setup-part-2">Setup Part 2</a></li>
<li><a href="#results">Results</a></li>
<li><a href="#to-do">To Do</a></li>
</ul>
<p>My hypervisors and VMs are mostly running vanilla Debian stable, which is why this post is not necessarily applicable to other distributions - but perhaps somewhat helpful nonetheless.</p>
<h2 id="directive"><a href="#directive">Directive</a></h2>
<p>The goal was to have my libvirt VMs (around two dozen across two hypervisors) automatically discard unused space from their underlying qcow2 image files. Apart from saving space, I was hoping to shave some time off my <a href="https://jschumacher.info/2016/03/virtual-machine-backup-without-downtime/">online backup mechanism</a>, which can take up to four hours for seven VMs on spinning disks. The two main approaches - as far as I can see - are either to add a <code class="language-plaintext highlighter-rouge">discard</code> option to a VM’s <code class="language-plaintext highlighter-rouge">fstab</code>, or to use the fstrim timer provided by Debian. Some <a href="https://wiki.debian.org/SSDOptimization#Mounting_SSD_filesystems">more explanation here</a>. I’ll be using a custom cronjob to invoke the <code class="language-plaintext highlighter-rouge">fstrim</code> command manually every few days; more on that later.</p>
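<p>For reference, the two stock approaches look roughly like this inside a guest (device, filesystem and mount point are placeholders):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># option 1: continuous discard via a mount option in /etc/fstab
/dev/vda1  /  ext4  defaults,discard  0  1

# option 2: periodic trimming via the systemd timer shipped with util-linux
systemctl enable --now fstrim.timer
</code></pre></div></div>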
<h2 id="state-of-things"><a href="#state-of-things">State of things</a></h2>
<p>All of my VMs’ root filesystems are hosted inside qcow2 images, which I find to be more flexible than using LVM volumes. Some of these VMs have extra data partitions (e.g. blockchain data, apt-mirrors) which don’t need backups and are therefore arranged as LVM volume groups. That’s why I’ll only be looking at setting up fstrim for root partitions (but extending its functionality across all partitions is trivial). Debian 10 ships with QEMU 3.1. Additionally, there’s one Windows 10 VM.</p>
<h2 id="research"><a href="#research">Research</a></h2>
<p>There’s a really helpful <a href="https://chrisirwin.ca/posts/discard-with-kvm-2020/">post regarding fstrim and KVM</a> by Chris Irwin, who (I’m guessing) is working with non-Debian hypervisors. I recommend reading it, but here’s a summary:</p>
<ul>
<li>starting with QEMU 4.0, <code class="language-plaintext highlighter-rouge">virtio</code> supports the discard option natively</li>
<li>no need to add an additional <code class="language-plaintext highlighter-rouge">virtio-scsi</code> controller anymore</li>
<li>specific VM machine type has to be <code class="language-plaintext highlighter-rouge">pc-q35-4.0</code> and upwards</li>
</ul>
<p>Executing <code class="language-plaintext highlighter-rouge">kvm -machine help</code> on my hypervisor shows support only up to <code class="language-plaintext highlighter-rouge">pc-q35-3.1</code>, which is expected with QEMU 3.1:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@atlas:~# kvm -machine help
Supported machines are:
pc                   Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-3.1)
pc-i440fx-3.1        Standard PC (i440FX + PIIX, 1996) (default)
pc-i440fx-3.0        Standard PC (i440FX + PIIX, 1996)
pc-i440fx-2.9        Standard PC (i440FX + PIIX, 1996)
[...]
q35                  Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-3.1)
pc-q35-3.1           Standard PC (Q35 + ICH9, 2009)
pc-q35-3.0           Standard PC (Q35 + ICH9, 2009)
pc-q35-2.9           Standard PC (Q35 + ICH9, 2009)
[...]
</code></pre></div></div>
<h2 id="setup-part-1"><a href="#setup-part-1">Setup Part 1</a></h2>
<p>Luckily, Debian is offering <a href="https://packages.debian.org/buster-backports/qemu">QEMU 5.0 through buster-backports</a> as of now (November 2020). After manually upgrading the respective packages, I’m now able to use <code class="language-plaintext highlighter-rouge">pc-q35-5.0</code>.</p>
<p>Note: at this point I recommend shutting down all VMs on the hypervisor that’s being worked on.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt update; apt install qemu qemu-block-extra qemu-system-common qemu-system-data qemu-system-gui qemu-system-x86 -t buster-backports
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@atlas:~# kvm -machine help
Supported machines are:
[...]
pc                   Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-5.0)
pc-i440fx-5.0        Standard PC (i440FX + PIIX, 1996) (default)
pc-i440fx-4.2        Standard PC (i440FX + PIIX, 1996)
pc-i440fx-4.1        Standard PC (i440FX + PIIX, 1996)
[...]
q35                  Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-5.0)
pc-q35-5.0           Standard PC (Q35 + ICH9, 2009)
pc-q35-4.2           Standard PC (Q35 + ICH9, 2009)
pc-q35-4.1           Standard PC (Q35 + ICH9, 2009)
[...]
</code></pre></div></div>
<p>Note: depending on your setup and configuration management, it might be advisable to pin the qemu packages to buster-backports, so as not to miss <a href="https://www.cvedetails.com/vulnerability-list/vendor_id-7506/Qemu.html">any updates</a>.</p>
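<p>One way to do that is an apt preferences fragment - an illustrative <code class="language-plaintext highlighter-rouge">/etc/apt/preferences.d/qemu-backports</code>, adjust to taste:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># raise qemu* from the default backports priority (100) to 500,
# so upgrades from buster-backports are picked up automatically
Package: qemu*
Pin: release a=buster-backports
Pin-Priority: 500
</code></pre></div></div>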
<p>Now, using virt-manager and enabling its XML editing setting, several things need to be taken care of:</p>
<ul>
<li>for machine type, I’m using <code class="language-plaintext highlighter-rouge">q35</code>, which libvirt automatically extends to <code class="language-plaintext highlighter-rouge">pc-q35-5.0</code></li>
</ul>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><type arch="x86_64" machine="pc-q35-5.0">hvm</type>
</code></pre></div></div>
<ul>
<li>the <code class="language-plaintext highlighter-rouge">discard</code> option needs to be added to the qcow2 driver</li>
</ul>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><driver name="qemu" type="qcow2" discard="unmap"/>
</code></pre></div></div>
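<p>If you prefer the command line over virt-manager, the same two edits can be made with virsh (the VM name vm01 is a placeholder):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>virsh shutdown vm01
virsh edit vm01    # adjust the machine type, add discard="unmap" to the driver
virsh start vm01
</code></pre></div></div>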
<p>By the way, Wordpress won’t let me add angle brackets without selecting the ugly default code type, because otherwise it thinks they’re HTML code. I can’t really be bothered to explore it further, but it makes me think about converting my entire page to static content some time..</p>
<p>.. which I finally did in January 2022.</p>
<h2 id="little-detour"><a href="#little-detour">Little detour</a></h2>
<p>At this point I had to apply a few more changes to VMs that were apparently created a while ago, with machine types like <code class="language-plaintext highlighter-rouge">pc-i440fx-2.8</code>. In order to apply <code class="language-plaintext highlighter-rouge">q35</code> to their configs, libvirt wanted me to change the PCI controller type from <code class="language-plaintext highlighter-rouge">pci-root</code> to <code class="language-plaintext highlighter-rouge">pcie-root</code>.</p>
<p>After booting any of these VMs, their network did not come up again. With pcie-root in place, Debian cheerfully renamed the network interfaces according to the new PCIe bus on the virtualized systems, breaking their network settings. The naming scheme always went from the original <code class="language-plaintext highlighter-rouge">ens0</code> to <code class="language-plaintext highlighter-rouge">enp0s3</code>. <a href="https://wiki.debian.org/NetworkInterfaceNames#THE_.22PREDICTABLE_NAMES.22_SCHEME">Predictable interface names</a>, anyone?</p>
<p>It was quickly rectified by manually logging into each machine’s local root console through virt-manager’s VNC connection and editing the network config.</p>
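<p>Concretely, that meant a rename in each guest’s <code class="language-plaintext highlighter-rouge">/etc/network/interfaces</code> - a DHCP setup is assumed here, static configs need the same rename:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># old pre-pcie-root name:
# allow-hotplug ens0
# iface ens0 inet dhcp
allow-hotplug enp0s3
iface enp0s3 inet dhcp
</code></pre></div></div>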
<p>Note: the Windows 10 VM was one of the Old Ones, but bravely handled the PCIe bus change by informing me that it’s now connected to “Network 2”. Whatever that means.</p>
<h2 id="setup-part-2"><a href="#setup-part-2">Setup Part 2</a></h2>
<p>With my VMs up and running again, fstrim should now be available:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@proxy:~# fstrim -v /
/: 56.6 GiB (60766765056 bytes) trimmed
</code></pre></div></div>
<p>Success!</p>
<p>As mentioned earlier, I’ve opted for my own custom cronjob, with a small puppet module wrapped around it.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class js::module::fstrim_kvm {
  package { 'virt-what': ensure => installed }
  if $facts['virtual'] == 'kvm' {
    cron { 'fstrim-root':
      ensure  => present,
      command => '/sbin/fstrim -v / >> /var/log/fstrim.log',
      user    => 'root',
      minute  => [fqdn_rand(30)],
      hour    => '23',
      weekday => [3,7],
      require => Package['virt-what'],
    }
  }
}
</code></pre></div></div>
<p>The cronjob requires the package <code class="language-plaintext highlighter-rouge">virt-what</code>, which puppet uses via its built-in fact <code class="language-plaintext highlighter-rouge">virtual</code> to determine whether the host is a KVM (QEMU) VM. The cronjob executes at a random minute (so the VMs don’t all run fstrim at the same time) during the 23rd hour twice a week, shortly before <a href="https://jschumacher.info/2016/03/virtual-machine-backup-without-downtime/">my VM backups</a> are running. Also, if there’s a log server to pick up log data, having fstrim stats might be (mildly) interesting.</p>
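<p>In case fqdn_rand is unfamiliar: it hashes the node’s FQDN into a stable value in the requested range, so every host gets a different but constant minute. A rough shell analogue - this is not puppet’s exact algorithm, and the FQDN is a placeholder:</p>

```shell
# Rough analogue of puppet's fqdn_rand(30): hash the FQDN into a stable
# value between 0 and 29, so each host gets its own constant minute.
# NOT puppet's exact algorithm; "vm01.example.com" is a placeholder.
fqdn="vm01.example.com"
hash=$(printf '%s' "$fqdn" | md5sum | cut -c1-8)   # first 8 hex digits
minute=$(( 0x$hash % 30 ))
echo "$minute"   # some value in 0..29, identical on every run
```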
<h2 id="results"><a href="#results">Results</a></h2>
<p>Comparing qcow2 images on the hypervisor before and after fstrim, the images are now taking up almost 70% less space. Very nice.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>total 148G
28G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 28G Nov 21 16:49 vm01.qcow2
21G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 101G Nov 21 16:49 vm02.qcow2
14G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 14G Nov 21 16:49 vm03.qcow2
53G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 53G Nov 21 16:49 vm04.qcow2
11G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 11G Nov 21 16:49 vm05.qcow2
6,6G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 6,7G Nov 21 16:49 vm06.qcow2
17G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 17G Nov 21 16:49 vm07.qcow2
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>total 43G
9,4G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 28G Nov 22 13:10 vm01.qcow2
7,1G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 101G Nov 22 13:10 vm02.qcow2
5,2G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 14G Nov 22 13:10 vm03.qcow2
5,0G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 53G Nov 22 13:10 vm04.qcow2
6,0G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 11G Nov 22 13:10 vm05.qcow2
2,8G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 6,8G Nov 22 13:10 vm06.qcow2
6,5G -rw-r--r-- 1 libvirt-qemu libvirt-qemu 17G Nov 22 13:10 vm07.qcow2
</code></pre></div></div>
<h2 id="to-do"><a href="#to-do">To Do</a></h2>
<p>I have yet to implement fstrim on my Windows VM (if possible), mostly because it’s only one VM with maybe a couple of gigabytes to reclaim. Also I’m too lazy to look into it. If you have a working solution, please drop a comment.</p>Jan SchumacherAfter some discussion with colleagues on how to best approach fstrim for qcow2 on libvirt in Debian 10, I sat down one Sunday afternoon researching and applying fstrim to my libvirt VMs.persistent postfix config inside PHP docker container2020-11-24T17:35:28+01:002020-11-24T17:35:28+01:00https://schumacher.sh/2020/11/24/persistent-postfix-config-inside-php-docker-container<p>One of my recent tasks included migrating an internal PHP-FPM application from a Debian 9 host (with a global PHP 7.0 installation) to a more flexible docker setup. One of the requirements was to retain the ability for the app to send mails to its users, which meant having a local SMTP server directly accessible to the PHP docker instance, and relaying any mails to a server on the outside.</p>
<p>I decided to set up a dockerized PHP-FPM environment through <a href="https://hub.docker.com/_/php">PHP’s official docker repo</a> using their image tagged as <em>php:7.4-fpm-buster</em>.</p>
<p>After some trial and error regarding proper RUN commands in the Dockerfile, this is what I came up with, which allows for a persistent mail server setup inside the PHP-FPM container.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>FROM php:7.4-fpm-buster
ENV TZ="Europe/Berlin"
RUN echo "date.timezone = Europe/Berlin" > /usr/local/etc/php/conf.d/timezone.ini
RUN date
RUN echo "postfix postfix/mailname string internalapp.example.com" | debconf-set-selections
RUN echo "postfix postfix/main_mailer_type string 'Internet Site'" | debconf-set-selections
RUN apt-get update && apt-get install -y postfix libldap2-dev libbz2-dev \
&& docker-php-ext-install bcmath ldap bz2
RUN postconf -e "myhostname = internalapp.example.com"
RUN postconf -e "relayhost = 172.18.0.1"
ADD launch.sh /launch.sh
CMD ["/launch.sh"]
</code></pre></div></div>
<p>Content of launch.sh:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/bash -e
/etc/init.d/postfix start
php-fpm
</code></pre></div></div>
<p>The extra launch script acts as a wrapper so that postfix and php-fpm can both be started through a single CMD command.</p>
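<p>One refinement worth considering - an assumption about your signal-handling needs, not something this setup strictly requires: exec’ing php-fpm makes it PID 1, so it receives the SIGTERM from <em>docker stop</em> directly instead of via the wrapper shell:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/bash -e
# start the local postfix instance, then replace the shell with php-fpm
# so php-fpm becomes PID 1 and receives container stop signals directly
/etc/init.d/postfix start
exec php-fpm
</code></pre></div></div>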
<p>Of course, “internalapp.example.com” is just a placeholder for the actual service URL. It’s important to set the postfix variables early through <em>debconf-set-selections</em> to allow for a promptless postfix installation later on, otherwise the container deployment gets stuck. I’ve also had to manually set the time zone, confirming its correctness by visually echoing <em>date</em> during deployment.</p>
<p>The <em>relayhost</em> is just the docker host itself, which is - in this case - running a postfix as well. Since I want it to act as a relay for my dockerized app, I’ve had to edit <em>/etc/postfix/main.cf</em>, allowing relay access from my docker network (which has been explicitly persisted in its <em>docker-compose.yml</em>):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 172.18.0.0/24
</code></pre></div></div>
<p>One advantage of using the host mail server as a relay is that everything gets logged in its local <em>mail.log</em>, which might be helpful for further debugging or auditing.</p>Jan SchumacherOne of my recent tasks included migrating an internal PHP-FPM application from a Debian 9 host (with a global PHP 7.0 installation) to a more flexible docker setup. One of the requirements was to retain the ability for the app to send mails to its users, which meant having a local SMTP server directly accessible to the PHP docker instance, and relaying any mails to a server on the outside.Keeping latest kernels in Debian with backports and puppet2020-11-15T12:27:06+01:002020-11-15T12:27:06+01:00https://schumacher.sh/2020/11/15/keeping-latest-kernels-in-debian-with-backports-and-puppet<p>I like running Debian stable as well as making use of recent kernels. Since I’m managing most of my infrastructure using puppet, I came up with a simple module which is included in my baseline role deployed on all systems.</p>
<p>The <a href="https://forge.puppet.com/modules/puppetlabs/apt">puppet apt module</a> is needed here.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class js::module::kernel_update {
  class { 'apt':
    update => {
      frequency => 'daily',
    },
  }
  if $facts['os']['architecture'] == 'amd64' {
    if $facts['os']['distro']['codename'] == 'stretch' {
      package { 'linux-image-amd64':
        ensure          => latest,
        install_options => ['-t', 'stretch-backports'],
      }
    }
    if $facts['os']['distro']['codename'] == 'buster' {
      package { 'linux-image-amd64':
        ensure          => latest,
        install_options => ['-t', 'buster-backports'],
      }
    }
  }
}
</code></pre></div></div>
<p>Naturally the backports repo needs to be included for this to work. My sources.list.erb (also included in the baseline role) looks like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><% if @os['distro']['id'] == 'Debian' -%>
deb http://aptmirror/debian/ <%= @os['distro']['codename'] %> main contrib non-free
deb http://aptmirror/debian-security/ <%= @os['distro']['codename'] %>/updates main contrib non-free
deb http://aptmirror/debian/ <%= @os['distro']['codename'] %>-updates main contrib non-free
deb http://aptmirror/debian/ <%= @os['distro']['codename'] %>-backports main contrib non-free
deb http://apt.puppetlabs.com <%= @os['distro']['codename'] %> puppet
<% end -%>
</code></pre></div></div>
<p>Just replace ‘<code class="language-plaintext highlighter-rouge">aptmirror</code>’ with an <a href="https://www.debian.org/mirror/list">apt mirror</a> to your liking. Or <a href="https://blog.programster.org/set-up-a-local-ubuntu-mirror-with-apt-mirror">run one yourself</a>.</p>Jan SchumacherI like running Debian stable as well as making use of recent kernels. Since I’m managing most of my infrastructure using puppet, I came up with a simple module which is included in my baseline role deployed on all systems.Moving from Firefox ESR to Firefox Quantum, or bye RequestPolicy2018-04-19T13:02:08+02:002018-04-19T13:02:08+02:00https://schumacher.sh/2018/04/19/moving-from-firefox-esr-to-firefox-quantum-or-bye-requestpolicy<p>When Firefox Quantum was released last fall I switched to the ESR branch, currently on v52.7.3. My main - and pretty much only - reason for not using Quantum until now was due to incompatibilities with addons not written as native WebExtensions. It’s been over six months since Quantum’s initial release, and as more WebExtension addons are available, I wanted to see if I’d be comfortable with moving on as well.</p>
<p>First of all, Quantum feels much faster than the old Firefox, even with a dozen enabled addons. My main concern was with <a href="https://requestpolicycontinued.github.io/">RequestPolicy Continued</a>, which I used for years to build my own whitelist in order to keep out as much browser tracking as possible. Since there is still no WebExtension port, I started exploring other addons and found that <a href="https://addons.mozilla.org/de/firefox/addon/ublock-origin/">uBlock Origin</a> is capable of everything RequestPolicy can do. I’ve used UO on Firefox before, but only as a general adblock addon with default settings. By denying any 3rd-party resources globally while using the default filter lists for blocking undesired 1st-party content, uBlock Origin has broader capabilities than RequestPolicy. <a href="https://github.com/RequestPolicyContinued/requestpolicy/wiki/FAQ#using-ublock-like-requestpolicy">Here’s</a> a nice explanation. But since there’s no way to export my RP whitelist to UO, I had to start over - which is not as painful as I initially feared. UO is a lot more effective in building a global whitelist for Firefox. <a href="https://github.com/gorhill/uBlock/wiki/Blocking-mode">The UO github has good explanations</a> on its different blocking modes.</p>
<p>Here’s what RequestPolicy Continued on Firefox ESR (52.7.3) vs. uBlock Origin in Hard Mode with Firefox 59.0.2 looks like on heise.de.</p>
<p><img src="/assets/images/requestpolicy.webp" alt="" /></p>
<p><img src="/assets/images/ublockorigin.webp" alt="" /></p>
<p>UO is globally rejecting any 3rd-party resource by default and I can create my whitelist on each website below. Note the yellow indicator, which applies the common blocklists to all 1st-party resources. In addition, I disabled web fonts globally in UO (bottom right indicator) which renders websites a little less pretty, but works for me so far.</p>
<p>I had no problem migrating my <a href="https://addons.mozilla.org/de/firefox/addon/noscript/">NoScript</a> whitelist, since it already has a WebExtension port. A few other great privacy-related addons for Quantum include <a href="https://addons.mozilla.org/en-US/firefox/addon/cookie-autodelete/">Cookie AutoDelete</a> and <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-settings/">Privacy Settings</a>. There’s also an <a href="https://addons.mozilla.org/en-US/firefox/addon/referrer-switch/">addon disabling Referrers</a> globally, but it’s missing some functionality from <a href="https://addons.mozilla.org/de/firefox/addon/refcontrol/">RefControl</a>, which I used before.</p>
<p>Overall, I’m happy with migrating to Firefox Quantum. It’s faster, less resource-hungry, and I was able to transfer all of my privacy-related workflows.</p>Jan SchumacherWhen Firefox Quantum was released last fall I switched to the ESR branch, currently on v52.7.3. My main - and pretty much only - reason for not using Quantum until now was due to incompatibilities with addons not written as native WebExtensions. It’s been over six months since Quantum’s initial release, and as more WebExtension addons are available, I wanted to see if I’d be comfortable with moving on as well.Upgrading to Debian Stretch & fixing Cacti thold notification mails2018-02-05T17:09:37+01:002018-02-05T17:09:37+01:00https://schumacher.sh/2018/02/05/upgrading-to-debian-stretch-fixing-cacti-thold-notification-mails<p>With the upgrade from Debian Jessie to Stretch, the <a href="https://packages.debian.org/stretch/cacti">Cacti package</a> went from 0.8.8b to 0.8.8h. A problem I had - and apparently a few other people, according to the Cacti forums - was that Cacti 0.8.8h in combination with the <a href="https://docs.cacti.net/plugin:thold">thold v0.5 plugin</a> and Stretch refused to send “Downed device notifications”, or threshold warnings in general. Sending test emails with the Cacti settings plugin worked just fine, but that was it.</p>
<p>The issue lies with the split() function, which had been deprecated for a while and was <a href="https://secure.php.net/manual/en/function.split.php">now removed from PHP 7</a>. Cacti logged the following error:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>PHP Fatal error: Uncaught Error: Call to undefined function split() in /usr/share/cacti/site/plugins/thold/includes/polling.php:28
</code></pre></div></div>
<p>To fix the problem and have Cacti send mails again, simply replace split() with explode() in polling.php:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sed -i -e 's/split(/explode(/g' /usr/share/cacti/site/plugins/thold/includes/polling.php
</code></pre></div></div>Jan SchumacherWith the upgrade from Debian Jessie to Stretch, the Cacti package went from 0.8.8b to 0.8.8h. A problem I had - and apparently a few other people, according to the Cacti forums - was that Cacti 0.8.8h in combination with the thold v0.5 plugin and Stretch refused to send “Downed device notifications”, or threshold warnings in general. Sending test emails with the Cacti settings plugin worked just fine, but that was it.Upgrading to Debian Stretch with dovecot, postfix & opendkim2017-06-07T15:45:27+02:002017-06-07T15:45:27+02:00https://schumacher.sh/2017/06/07/upgrading-to-debian-stretch-with-dovecot-postfix-opendkim<p>Debian Stretch <a href="https://wiki.debian.org/DebianStretch">is about to be released</a>. I’m already upgrading some of my systems, and want to document a few issues I encountered after upgrading my mail server from Debian Jessie to Stretch.</p>
<h3 id="dovecot-forgot-whats-sslv2">Dovecot forgot what’s SSLv2</h3>
<p>Before the upgrade, dovecot was configured to reject login attempts with SSLv2 & SSLv3. The corresponding line in /etc/dovecot/dovecot.conf looked like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssl_protocols = !SSLv3 !SSLv2
</code></pre></div></div>
<p>After upgrading, logging into the mail server failed. Looking at the syslogs:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dovecot: imap-login: Fatal: Invalid ssl_protocols setting: Unknown protocol 'SSLv2'
</code></pre></div></div>
<p>With the upgrade to Stretch and openssl 1.1.0, support for SSLv2 was dropped entirely. Dovecot simply doesn’t recognize the argument anymore. Editing dovecot.conf helped.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssl_protocols = !SSLv3
</code></pre></div></div>
<h3 id="opendkim-using-file-based-sockets-update-2017-10-13">opendkim using file based sockets (Update 2017-10-13)</h3>
<p>UPDATE - previous releases of opendkim on Stretch (v2.11.0) were affected by a bug, ignoring its own config file. See the <a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864162">Debian bug report.</a></p>
<p>The correct way to (re)configure the systemd daemon is to edit the default conf and regenerate the systemd config.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vi /etc/default/opendkim
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># listen on loopback on port 12301:
SOCKET=inet:12301@localhost
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/lib/opendkim/opendkim.service.generate
systemctl daemon-reload; systemctl restart opendkim
</code></pre></div></div>
<p>Tell postfix to use the TCP socket again, if necessary.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vi /etc/postfix/main.cf
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># DKIM config
milter_protocol = 2
milter_default_action = accept
smtpd_milters = inet:localhost:12301
non_smtpd_milters = inet:localhost:12301
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>systemctl restart postfix
</code></pre></div></div>
<p>This should do it.</p>
<hr />
<p>Before the upgrade, opendkim (v2.9.2) was configured as an init.d service using loopback to connect to postfix.</p>
<p><strong>/etc/default/opendkim</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SOCKET="inet:12301@localhost" # listen on loopback on port 12301
</code></pre></div></div>
<p><strong>/etc/postfix/main.cf</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># DKIM config
milter_protocol = 2
milter_default_action = accept
smtpd_milters = inet:localhost:12301
non_smtpd_milters = inet:localhost:12301
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@host:~# systemctl status opendkim
opendkim.service - LSB: Start the OpenDKIM service
Loaded: loaded (/etc/init.d/opendkim)
Active: active (running) since Mi 2017-05-31 15:23:34 CEST; 6 days ago
Process: 715 ExecStart=/etc/init.d/opendkim start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/opendkim.service
├─791 /usr/sbin/opendkim -x /etc/opendkim.conf -u opendkim -P /var/run/opendkim/opendkim.pid
└─796 /usr/sbin/opendkim -x /etc/opendkim.conf -u opendkim -P /var/run/opendkim/opendkim.pid
</code></pre></div></div>
<p>During the system upgrade, the opendkim daemon was reconfigured as a native systemd daemon, which meant <strong>/etc/default/opendkim</strong> and <strong>/etc/init.d/opendkim</strong> became obsolete, even though I was asked to install the new package maintainer’s version of /etc/default/opendkim.</p>
<p>Now the opendkim (v2.11.0) systemd daemon looked like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>opendkim.service - OpenDKIM DomainKeys Identified Mail (DKIM) Milter
Loaded: loaded (/lib/systemd/system/opendkim.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/opendkim.service.d
└─override.conf
Active: active (running) since Wed 2017-06-07 13:10:15 CEST; 23s ago
Main PID: 4806 (opendkim)
Tasks: 7 (limit: 4915)
CGroup: /system.slice/opendkim.service
├─4806 /usr/sbin/opendkim -P /var/run/opendkim/opendkim.pid -p local:/var/run/opendkim/opendkim.sock
└─4807 /usr/sbin/opendkim -P /var/run/opendkim/opendkim.pid -p local:/var/run/opendkim/opendkim.sock
</code></pre></div></div>
<p>I tried editing /etc/postfix/main.cf & adding the postfix user to the opendkim group to reflect the changes:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># DKIM config
milter_protocol = 2
milter_default_action = accept
smtpd_milters = local:/var/run/opendkim/opendkim.sock
non_smtpd_milters = local:/var/run/opendkim/opendkim.sock
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@host:~# adduser postfix opendkim
</code></pre></div></div>
<p>After restarting opendkim & postfix, the connection still failed to work.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>postfix/smtpd[4451]: warning: connect to Milter service local:/var/run/opendkim/opendkim.sock: No such file or directory
</code></pre></div></div>
<p>Some research revealed that postfix does chroot its process to /var/spool/postfix (didn’t know that). To reflect this, I created new subdirectories and edited the systemd daemon.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@host:~# mkdir -p /var/spool/postfix/var/run/opendkim
root@host:~# chown -R opendkim:opendkim /var/spool/postfix/var
root@host:~# systemctl edit opendkim
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[Service]
ExecStart=
ExecStart=/usr/sbin/opendkim -P /var/run/opendkim/opendkim.pid -p local:/var/spool/postfix/var/run/opendkim/opendkim.sock
</code></pre></div></div>
<p>Note that the double ExecStart isn’t a typo: in a drop-in, an empty <code>ExecStart=</code> line clears the command inherited from the original unit file before the second line sets the new one.</p>
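<p>The path mapping that forced the new socket location can be illustrated in plain shell: the socket path from main.cf is resolved relative to the chroot (the Postfix queue directory), so the path smtpd actually opens is the concatenation of the two.</p>

```shell
# Postfix's chrooted smtpd prepends its chroot directory (the queue
# directory) to the socket path configured in main.cf.
chroot_dir=/var/spool/postfix
socket_path=/var/run/opendkim/opendkim.sock
echo "${chroot_dir}${socket_path}"
```

This prints /var/spool/postfix/var/run/opendkim/opendkim.sock, matching the directory created above.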
<p>After restarting all affected services, my sent mails were getting a valid DKIM signature again.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>opendkim[11357]: OpenDKIM Filter v2.11.0 starting (args: -P /var/run/opendkim/opendkim.pid -p local:/var/spool/postfix/var/run/opendkim/opendkim.sock)
</code></pre></div></div>Jan SchumacherDebian Stretch is about to be released. I’m already upgrading some of my systems, and want to document a few issues I encountered after upgrading my mail server from Debian Jessie to Stretch.Encrypt an existing Linux installation with LUKS and LVM2016-11-17T19:22:31+01:002016-11-17T19:22:31+01:00https://schumacher.sh/2016/11/17/encrypt-an-existing-linux-installation-with-luks-and-lvm<p>An issue I encountered recently: how to encrypt an existing Xubuntu setup. There are several ways to achieve this; I want to document the process I used.</p>
<p>I’m working with the following assumptions:</p>
<ul>
<li>The Linux installation to be encrypted is the only OS on disk.</li>
<li>The system is (X)Ubuntu or similar (Debian-based). Commands, paths to config files or package names might differ in other distributions.</li>
<li>The system is EFI-enabled. This means there is a 512 MiB FAT partition at the beginning of the disk, containing the EFI loader. This partition <strong>has to remain untouched</strong>. If your system is using legacy boot, ignore instructions regarding EFI later on.</li>
<li>A Live Linux USB stick (e.g. Xubuntu 16.10) and a separate hard disk with at least the same size as the system drive are available and ready. When in doubt, use a disk which is larger than the system drive.</li>
<li>The entire process takes time.</li>
<li>Mistakes happen. Be ready to lose data from the installed system! Ideally, there are multiple recent backups in place.</li>
</ul>
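<p>Whether the system really boots via EFI (the assumption above) can be verified from the running system before starting; the kernel exposes a marker directory only on UEFI boots:</p>

```shell
# /sys/firmware/efi is created by the kernel only when it was booted via UEFI.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI boot"
else
    echo "legacy BIOS boot"
fi
```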
<p>Before booting from the USB Linux, prepare the installed system by pulling in the necessary packages and latest updates.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@host:~# apt update; apt upgrade; apt install cryptsetup pv lvm2 gparted
</code></pre></div></div>
<p>Remove old kernel images. This might take a while, depending on the age of the Linux installation.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@host:~# apt autoclean; apt autoremove
</code></pre></div></div>
<p>Shut down the computer, connect the USB disk and the second hard drive. Boot into the live system. Make sure your keyboard layout is set accordingly.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# dpkg-reconfigure keyboard-configuration
</code></pre></div></div>
<p>Install necessary packages on the live system as well.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# apt update; apt install cryptsetup pv lvm2 gparted
</code></pre></div></div>
<p>Annoyingly, my live system auto-mounted the old system disk. Unmount if necessary.</p>
<p>Use <strong>fdisk -l</strong> to check the order of drives. In my case, <strong>sda</strong> is the old system disk, <strong>sdb</strong> is the USB stick, <strong>sdc</strong> is the second hard drive. Use dd to copy the entire system disk to the second drive, with pv monitoring progress. Don’t overwrite your system.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# dd if=/dev/sda | pv --progress --eta --bytes --rate | dd of=/dev/sdc
</code></pre></div></div>
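<p>Before touching the original disk, it’s worth verifying the clone. One possible check, assuming sdc is at least as large as sda (only the first size-of-sda bytes are compared, since a larger target leaves the trailing space unused):</p>

```shell
# Read the source disk's exact size in bytes, then byte-compare that
# many bytes of both disks; GNU cmp prints nothing when they match.
size=$(blockdev --getsize64 /dev/sda)
cmp -n "$size" /dev/sda /dev/sdc && echo "clone verified"
```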
<p>When finished, open gparted and choose your system disk.</p>
<p><img src="/assets/images/encrypt_1.webp" alt="" /></p>
<p>Delete the root and swap partition, create a new boot partition (512MiB, ext4, set boot / esp flags) and create a “cleared” partition from remaining available space. Leave the EFI partition untouched. Note: if there’s no EFI boot partition, format the entire disk and create partitions as described.</p>
<p>The result looks like this:</p>
<p><img src="/assets/images/encrypt_2.webp" alt="" /></p>
<p>Consider securely erasing the old system partition. It takes time, but leaves no trace of unencrypted data on the system drive. Open a plain dm-crypt container on the partition, keyed from /dev/urandom:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# cryptsetup open --type plain /dev/sda3 container --key-file /dev/urandom
</code></pre></div></div>
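<p>Opening the plain container only sets up the mapping; writing zeros through it is what actually overwrites the partition with data indistinguishable from random. A sketch of the remaining steps (dd is expected to end with “No space left on device” once the container is full):</p>

```shell
# Fill the random-keyed container with zeros, then tear down the mapping.
dd if=/dev/zero of=/dev/mapper/container bs=1M status=progress
cryptsetup close container
```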
<p>Proceed to create the encrypted volume on the cleared partition and choose a strong password.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha512 /dev/sda3
</code></pre></div></div>
<p>Open the encrypted volume.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# cryptsetup luksOpen /dev/sda3 encrypted_system
</code></pre></div></div>
<p>Create a LVM volume group and logical volumes on top of the opened LUKS volume. Note: tempo is the name I chose. Feel free to use another name for the volume group, but keep it consistent.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# pvcreate /dev/mapper/encrypted_system
root@live:~# vgcreate tempo /dev/mapper/encrypted_system
root@live:~# lvcreate -L 8G tempo -n swap
root@live:~# lvcreate -l 100%FREE tempo -n root
</code></pre></div></div>
<p>Set up the swap and root volume.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# mkswap /dev/mapper/tempo-swap
root@live:~# mkfs.ext4 /dev/mapper/tempo-root
</code></pre></div></div>
<p>Mount the new root volume to /mnt.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# mount /dev/mapper/tempo-root /mnt
</code></pre></div></div>
<p>Mount the old root partition, which has been copied to the second drive.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# mkdir -p /media/old_root
root@live:~# mount /dev/sdc3 /media/old_root/
</code></pre></div></div>
<p>Navigate to the old root directory and use tar to copy the root system to the new LVM volume. The first tar doesn’t compress anything; it writes an archive of the current directory to stdout, which is piped to a second tar that reads from stdin and unpacks into /mnt. This way, all file ownership, permissions and other attributes are preserved.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# cd /media/old_root/
root@live:~# tar cvf - . | tar xf - -C /mnt/
</code></pre></div></div>
<p>When finished, delete all contents from the boot directory, since this will be the mount point for the new boot partition. Use the piped tar command to copy its contents from the second drive. Mount the EFI partition as well.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# rm -rf /mnt/boot/*
root@live:~# mount /dev/sda2 /mnt/boot
root@live:~# cd /media/old_root/boot/
root@live:~# tar cvf - . | tar xf - -C /mnt/boot/
root@live:~# mount /dev/sda1 /mnt/boot/efi
</code></pre></div></div>
<p>Get the UUID of the encrypted LUKS volume. We need this later on.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# blkid /dev/sda3
/dev/sda3: UUID="0f348572-6937-410f-8e04-1b760d5d11fe" TYPE="crypto_LUKS" PARTUUID="85f58482-8b18-446a-8cb6-cfdfe30c7d55"
</code></pre></div></div>
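<p>Rather than copying the UUID by hand, it can also be captured into a shell variable (a convenience, assuming /dev/sda3 is still the LUKS partition):</p>

```shell
# -s UUID limits output to the UUID tag, -o value strips the key="..." wrapper.
luks_uuid=$(blkid -s UUID -o value /dev/sda3)
echo "$luks_uuid"
```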
<p>Prepare the new root system in /mnt for chroot.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# for dir in /dev /dev/pts /proc /sys /run; do mount --bind $dir /mnt/$dir; done
root@live:~# chroot /mnt
</code></pre></div></div>
<p>In the chrooted environment, we need to create or edit several config files to tell Linux where to find the LVM swap / root volumes and how to open them. Create <strong>/etc/crypttab</strong> with the mapping name used to open the volume (encrypted_system), the LUKS UUID we got earlier, and the volume group name (tempo in my case).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#
encrypted_system UUID=0f348572-6937-410f-8e04-1b760d5d11fe none luks,discard,lvm=tempo
</code></pre></div></div>
<p>Create a file named <strong>/etc/initramfs-tools/conf.d/cryptroot</strong> in the chrooted environment. Replace tempo-root with the name of your root logical volume and insert the UUID of the LUKS partition.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CRYPTROOT=target=tempo-root,source=/dev/disk/by-uuid/0f348572-6937-410f-8e04-1b760d5d11fe
</code></pre></div></div>
<p>Run the following command in the chrooted environment. It should pass without issues.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# update-initramfs -k all -c
</code></pre></div></div>
<p>Open <strong>/etc/default/grub</strong> in the chrooted environment. Find this line:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GRUB_CMDLINE_LINUX=""
</code></pre></div></div>
<p>Insert the appropriate values (root volume target, LUKS UUID, volume group name):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GRUB_CMDLINE_LINUX="cryptopts=target=tempo-root,source=/dev/disk/by-uuid/0f348572-6937-410f-8e04-1b760d5d11fe,lvm=tempo"
</code></pre></div></div>
<p>Update grub in the chrooted environment. It will read arguments from /etc/default/grub and create new boot entries.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@live:~# update-grub
</code></pre></div></div>
<p>Open <strong>/etc/fstab</strong> in the chrooted environment. Update the entries for the encrypted root and swap volumes. Use blkid to find the UUID of the new boot partition. Leave the EFI partition entry untouched. My new fstab looks like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>UUID=2886e598-0d5c-4576-87e7-a234011e7725 /boot ext4 defaults 0 2
UUID=E2F4-2888 /boot/efi vfat umask=0077 0 3
/dev/mapper/tempo-root / ext4 errors=remount-ro 0 1
/dev/mapper/tempo-swap none swap sw 0 0
</code></pre></div></div>
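<p>Still inside the chroot (with /sys and /proc bind-mounted), the assembled stack can be sanity-checked before rebooting; a quick look, assuming the device names from above:</p>

```shell
# sda3 should show up as crypto_LUKS, with the tempo volume group's
# root and swap logical volumes nested below the opened container.
lsblk -f /dev/sda
```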
<p>That’s it. Close the chrooted environment and shut down the computer. Remove the USB stick and second hard drive. A password prompt should appear during boot. If everything goes well, the newly encrypted system will boot. Check that all partitions are mounted accordingly. Reboot again to check that recovery mode works as well. Note that you still have an exact copy of your system prior to encryption on the second hard drive. After verifying the encrypted system works as intended, you might want to consider <a href="/erasing-with-openssl">secure erasing</a> of the unencrypted disk.</p>Jan SchumacherAn issue I encountered recently: how to encrypt an existing Xubuntu setup. There are several ways to achieve this; I want to document the process I used.