Unexpected usefulness of offline blogging with Hugo

Life is changing. The internet evolves and becomes available everywhere.. and yet, suddenly, we find ourselves offline more and more often.

When you had a cable connection and expected to be online only at the office or at home - you were always online, always at a good (or at least consistent) speed. Then mobile internet arrived, and at first it affected only content reading, not content creation.

But now the lifestyle itself has changed and people have become more mobile. More and more professions go remote. That leads to more travel. And more travel leads to more time offline. Quite often: unexpectedly offline. But you still have to do your remote work on time.

Right now I’m writing this post from an airport. I don’t have a local SIM yet, and the airport wifi disconnects every hour.

With Hugo I can still work on my website. The local server lets me check everything: I can fix older posts, work on SEO, and write new ones.

What could I do with an oldschool CMS which requires me to be online with a stable connection? Only take some notes, to copy-paste them into the CMS and fix their look later. And even copy-paste can be harmful for the website layout, as lots of CMS themes do not provide HTML validation and rely on WYSIWYG editors.

My flight was rescheduled twice, so back in my WordPress days I would have lost 27 hours of working time. That’s a lot!

Spring Boot Multimodule App to Kubernetes

Previously in the series (I will use the same setup)

Create docker image

mvn -am -pl web spring-boot:build-image

Where web is the subproject name. According to 12factor we have to use environment variables for connections, and we will probably have at least a JDBC connection to the database, so it becomes:

DATASOURCE_URL="jdbc:mysql://localhost:3306/dbname" \
DATASOURCE_USERNAME="username" \
DATASOURCE_PASSWORD="password" \
mvn -am -pl web spring-boot:build-image

(If it’s not a multimodule project, just skip -am -pl web.)

When the build is finished, we should see in the output:

Successfully built image 'docker.io/library/projectName:1.0-SNAPSHOT'

This image name can be used to run a Docker container.

Run in docker

docker run --network="host" \
    -e DATASOURCE_URL="jdbc:mysql://localhost:3306/dbname" \
    -e DATASOURCE_USERNAME="username" \
    -e DATASOURCE_PASSWORD="password" \
    -p 8080:8080 \
    projectName:1.0-SNAPSHOT

Now we have the image, but Kubernetes can’t access it yet. The image has to be pushed to some registry or loaded into minikube directly like this:

Load image to minikube

minikube image load projectName:1.0-SNAPSHOT

Generate .yaml for kubernetes deployment

kubectl create deployment \
    --image=projectName:1.0-SNAPSHOT \
    --dry-run=client \
    -o=yaml projectName > projectName.yaml
echo --- >> projectName.yaml
kubectl create service clusterip projectName \
    --tcp=8080:8080 \
    --dry-run=client \
    -o=yaml >> projectName.yaml

Adjust spec/template/spec/containers

Disable image pull

Since we use a local image, we do not need Kubernetes to pull it from the public Docker registry.

imagePullPolicy: Never

Connection to external database on localhost

env:
    - name: DATASOURCE_URL
      value: "jdbc:mysql://host.minikube.internal:3306/projectName"
    - name: DATASOURCE_USERNAME
      value: "projectName"
    - name: DATASOURCE_PASSWORD
      value: "projectName"

Notice host.minikube.internal: this is the host name for your host machine; localhost would point to the pod itself (to the guest VM).

The result should look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: projectName
  name: projectName
spec:
  replicas: 1
  selector:
    matchLabels:
      app: projectName
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: projectName
    spec:
      containers:
      - image: projectName:1.0-SNAPSHOT
        imagePullPolicy: Never
        name: projectName
        env:
        - name: DATASOURCE_URL
          value: "jdbc:mysql://host.minikube.internal:3306/projectName"
        - name: DATASOURCE_USERNAME
          value: "projectName"
        - name: DATASOURCE_PASSWORD
          value: "projectName"
        resources: { }
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: projectName
  name: projectName
spec:
  ports:
  - name: 8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: projectName
  type: ClusterIP
status:
  loadBalancer: {}

Apply projectName.yaml

kubectl apply -f projectName.yaml

After that, kubectl get pods should return a list of pods with the state Running next to projectName-XXXXXXX-XXX.
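If you don’t want to copy the generated pod name by hand, one possible way to grab it (assuming the app=projectName label that kubectl create deployment added above) is:

POD_NAME=$(kubectl get pods -l app=projectName -o jsonpath='{.items[0].metadata.name}')
echo $POD_NAME

You can then use it wherever POD_NAME appears below.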

View logs

kubectl logs POD_NAME

will show you the console output, like:

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.7.4)

...
2022-11-20 06:00:24.718  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2022-11-20 06:00:24.730  INFO 1 --- [           main] org.Application       : Started Application in 4.718 seconds (JVM running for 4.967)

Or some errors, if any.

Now the pod is running, but it lives inside the cluster, so we need to get access from the outside.

Access service through proxy

kubectl port-forward svc/projectName 8080:8080

Now you should be able to connect to your Spring Boot application at http://localhost:8080
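As a quick smoke test you can hit it with curl (adjust the path to an endpoint your application actually serves):

curl -i http://localhost:8080/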

How to start with unit testing in python

A sponsored reply to Vasilina and Nazar

The general sequence for anything in Test Driven Development (I will use the same setup):

  1. Try and Fail
  2. Fix and Confirm
  3. Polish
  4. Repeat

Also known as:

Red - Green - Refactoring - Repeat (Red/Green refer to the test frameworks’ colored output)

So let’s try and fail first

We start with nothing: no tests, no code. Just empty project folder.

Let’s run the tests. What do we expect from running tests in an empty folder? We expect a message that there are zero tests and there is nothing to run. If we get this message - that counts as success.

Let’s try:

$ pytest

And the output is:

pytest: No such file or directory

We are at the RED state here: we tried and failed.

Let’s fix that and get to the GREEN state

Install pytest:

$ pip install pytest

Output:

Installing collected packages: pytest
Successfully installed pytest-7.2.0

Confirm green state

Now, let’s run the tests again

$ pytest

And now we get

$ pytest
============================== test session starts ==============================
platform linux -- Python 3.10.1, pytest-7.2.0, pluggy-1.0.0
rootdir: /home/snowyurik/tmp/10
asyncio: mode=strict
collected 0 items                                                               

============================= no tests ran in 0.00s =============================

Polish

Let’s look into our code: do we have any Code Smells?
(“Code smells” or “antipatterns” are rules for detecting bad code; the most notable are “duplicated code” and “magic” aka “hardcode”.)

No code = no code smells, excellent!
Refactoring is done 😎

Repeat

To start the next iteration we have to change our expectations.
Let’s now expect that we run at least one test.

Try and fail

Btw, I advise you to really do so; I actually do such “pointless” actions on an everyday basis.

$ pytest

And we get

$ pytest
============================== test session starts ==============================
platform linux -- Python 3.10.1, pytest-7.2.0, pluggy-1.0.0
rootdir: /home/snowyurik/tmp/10
collected 0 items                                                               

============================= no tests ran in 0.00s =============================

Same message, but since our expectation has changed, we can’t be satisfied with it anymore. From now on we will interpret no tests ran in 0.00s as a RED state.

Let’s fix that and confirm the fix 😊

Add the simplest possible test.
Here are the official docs https://docs.pytest.org/en/7.2.x/getting-started.html but who has time to read everything?
It will be just an empty file, test.py (by the way, that’s not a correct name):

$ touch test.py

Does it work?

$ pytest
============================== test session starts ==============================
..
collected 0 items                                                               

============================= no tests ran in 0.00s =============================

No

Why?

Let’s have a quick look at the docs… maybe our test file was not found automatically, so let’s rename it exactly like in the example:

mv test.py test_sample.py

Run pytest again. Does it work now? No. So we need more: let’s copy-paste the code from the tutorial. This will be our test_sample.py content:

# content of test_sample.py
def func(x):
    return x + 1


def test_answer():
    assert func(3) == 5

What does this code mean? We don’t care.
Right now all we want is to get a message from pytest that at least one test was executed, no more, no less.

$ pytest
============================= test session starts ==============================
...
collected 1 item                                                               

test_sample.py F                                                         [100%]

=================================== FAILURES ===================================
_________________________________ test_answer __________________________________

    def test_answer():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)

test_sample.py:7: AssertionError
=========================== short test summary info ============================
FAILED test_sample.py::test_answer - assert 4 == 5
============================== 1 failed in 0.05s ===============================

Success!
Yep, it’s colored red, but we see that the tests are executed.
And pytest shows us where exactly the test failed.

Let’s polish that and make our green state actually green

Here is our test

def test_answer():
    assert func(3) == 5

What’s going on here?
Pytest (by the way, unittest for Python acts the same) will find functions whose names start with test_ and execute them.

Ok, what is assert?
assert means “we expect that the following statement is true”. So assert func(3) == 5 means “we expect that func(3) will return 5”.
But it does not. So let’s modify func():

def func(x):
    return x + 2

And run pytest again

$ pytest
============================== test session starts ==============================
...
collected 1 item                                                                

test_sample.py .                                                          [100%]

=============================== 1 passed in 0.01s ==============================

Now our green state is actually green, nice 🙂

Let’s see if we have any code smells here and mark them with # TODO:

def func(x):  # TODO: function name does not describe the function
    return x + 2  # TODO: hardcoded value 2


def test_answer():
    assert func(3) == 5  # TODO: hardcoded values 3 and 5

We can’t leave it like that. Why do we have this code in the first place? We just copied the example. It makes no sense in our project, and it should not stay.
What are we testing in the first place?
We are testing the ability to run tests. Let’s modify the test so it serves only its intended purpose.
We do not need def func(x) at all, so we remove it and run pytest again:

E       NameError: name 'func' is not defined

Hmm.. let’s replace func(3) with its result:

def test_answer():
    assert 5 == 5

Green state. But we still have a hardcoded 5. Let’s just use True:

def test_answer():
    assert True

And make the test name self-explanatory:

def test_ifWeCanExecuteTests():
    assert True

Run pytest. The state is green. What else do we have? Oh yes, the filename. As you remember, test.py did not work, but was it a filename issue or a lack of test functions inside? We can check:

$ mv test_sample.py test.py
$ pytest
...
============================= no tests ran in 0.00s =============================

Looks like the filename should be like test<underscore><something>.py. Let’s make it self-explanatory:

$ mv test.py test_application.py
$ pytest
...                                                                        [100%]
=============================== 1 passed in 0.00s ===============================

Refactoring done 🙄

Repeat 😅

Now we are ready to implement something real. For that we need a task.
Let it be:
“Create a validator for phone numbers”

Try and fail

We start by writing another test before any implementation:

def test_validatePhome():
    assert isValid("+1(111)11-11-111")

And run the test:

E       NameError: name 'isValid' is not defined

We are at RED state.

Fix and confirm

Now our goal is to achieve GREEN state in the simplest possible way.

def isValid():
    return True
    
def test_validatePhome():
    assert isValid("+1(111)11-11-111")

Now it’s green.

Refactor

Yep, we could do that, but let’s assume we are lazy and see if TDD will force us to do it.

Repeat

Let’s add another test; we need a RED state, remember.

Try and fail

def test_validatePhome():
    assert isValid("+1(111)11-11-111")
    assert isValid("+2(222)22-22-222")

Run pytest and.. still green. We need red. Just adding any new test makes no sense here; since we need red, we have to write a test which will fail.
How about this one:

def test_validatePhome():
    assert isValid("+1(111)11-11-111")
    assert isValid("this is definitely not valid phone number") == False

Pytest:

FAILED test_application.py::test_validatePhome - AssertionError: assert True == False

Nice, RED state

Fix and confirm

Why is the second “phone number” not correct? There are many answers. Let’s say it’s too long. It looks like valid phone numbers can contain a maximum of 15 digits. So let’s check that inside isValid:

def isValid(phone):
    if( len(phone) > 15 ):
        return False
    return True

And.. our first assertion failed, because len("+1(111)11-11-111") is actually 16. So we see that we can’t just count symbols. We need to count digits:

def isValid(phone):
    digits = sum(symbol.isdigit() for symbol in phone)
    if( digits > 15 ):
        return False
    return True

Does it work? The first assertion passes, but “this is definitely not valid phone number” actually has zero digits, which is also not correct, so:

def isValid(phone):
    digits = sum(symbol.isdigit() for symbol in phone)
    if( digits > 15 or digits < 6 ):  # 6 is the minimum
        return False
    return True

And now it’s green 🤗

Polish

Obviously 15 and 6 are “magic numbers” aka “hardcode”. We can turn them into constants with self-explanatory names, and also move them to a separate file together with the isValid function, because it’s a mess now. Let’s say phoneValidator.py:

MIN_PHONE_NUMBER_LENGTH = 6
MAX_PHONE_NUMBER_LENGTH = 15

def isValid(phone):
    digits = sum(symbol.isdigit() for symbol in phone)
    if( digits > MAX_PHONE_NUMBER_LENGTH or digits < MIN_PHONE_NUMBER_LENGTH ):
        return False
    return True

And for our test to work we need to import it, so:

from phoneValidator import *

def test_ifWeCanExecuteTests():
    assert True
    
def test_validatePhome():
    assert isValid("+1(111)11-11-111")
    assert isValid("this is definitely not valid phone number") == False

Pytest.. green 🥳

Continue like that

Can you imagine another wrong phone number which will pass the test? Create the test and change the implementation (one possible next step is sketched below).
No? Then can you imagine a correct phone number which won’t pass the test? Create a test for the false-negative result and change the implementation.
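For example, here is one possible next RED step - a hypothetical test of my own, not part of the original exercise. The current implementation only counts digits, so a string with letters and exactly six digits still passes as “valid”:

def test_validatePhone_withLetters():
    # 6 digits, so the current digit-count check in isValid() returns True,
    # but a phone number should not contain letters
    assert isValid("call 123456 now") == False

Running pytest now gives a RED state again, which forces the next change to isValid(), and the loop continues.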

With an iterative process like that, you will get closer and closer to the ideal solution.

Can you do it faster on-sight? Maybe. If you are good at regular expressions and phone number standards, you might not need tests.

But to create something with tests you do not have to be good at anything, including the tests themselves!
Just keep it simple, keep it iterative, take small steps, always refactor old code, and you will create anything. There is no limit.

Typescript+ReactJS: useState/useEffect vs component state interface/componentDidUpdate

If you are using ReactJS with TypeScript there are 2 alternative code styles: class-based components and function-based components.

useState and useEffect belong to the function component style only.

For the object-oriented style we use a state interface instead of useState and componentDidUpdate instead of useEffect.

Example

Function based component

import React, {useState, useEffect} from "react";

interface MyFunctionComponentProps {
    text:string;
    x: number;
    y: number;
}

const MyFunctionComponent: React.FC<MyFunctionComponentProps> = ({}) => {
    const [items, setItems] = useState<number[]>([1,2,3]);
    const [title, setTitle] = useState<string>("default title");
    
    useEffect(() => {
        // fires if entire state changed
    });
    
    useEffect(()=>{
        // fires only if title changed
    },[title]);
    
    return (
        <div className="title">{title}</div>
    )
}

See also: https://reactjs.org/docs/hooks-reference.html

Class based component

import React from "react";

export interface MyClassComponentProps {
    text:string;
    x: number;
    y: number;
}

// this is useState hook alternative for class component
export interface MyClassComponentState {
    items: number[];
    title: string;
}

export default class MyClassComponent extends React.Component<MyClassComponentProps, MyClassComponentState> {

    // default values for props
    public static defaultProps = {
        text: "",
        x: 0,
        y: 0,
    }

    public constructor(props: MyClassComponentProps) {
        super(props);
        // initial component state
        this.state = {
            items: [ 1, 2, 3 ],
            title: "default title"
        };
    }
    
    // this is useEffect alternative for class component
    public componentDidUpdate(prevProps: MyClassComponentProps, prevState: MyClassComponentState, snapshot?: any) {
        // do something if props or state is changed
        if( prevState.title !== this.state.title ) {
            // do something if title is changed
        }
    }
    
    public render() {
        return (
            <div className="title">{this.state.title}</div>
        );
    }
}

See also https://reactjs.org/docs/react-component.html

Pros and cons

As you can see, the functional component code is much shorter and clearer.. in this particular case. Anyway, there is no reason to switch from classes to functions just because of the useState/useEffect hooks.

Hugo feedback problem

Ok, I have this static blog now, it’s kind of useful for myself, but I have no idea if anyone else reads it. I could use some 3rd-party JS-based tracker, but adblockers would cancel it, and that feels right. Since this blog is a static website hosted on GitHub Pages, I can’t count visitors myself on the server side.

So there is no obvious solution. Let’s take a step back and look at the bigger picture.

What is the purpose of counting visitors?

  1. I need some feedback for psychological comfort. Actually, this is the main reason. Pointless work leads to depression, so it should be avoided.
     1.1. The blog itself is feedback that helps me find solutions to some complicated situations.
  2. I want to know which posts are good and which posts are bullshit.
  3. In crisis times I want to use every opportunity to earn something; do I have any opportunities here?
  4. In wealthy times I want to help other people become wealthy, because economies of scale have a magical effect on the infrastructure I use. More digital nomads -> cheaper flights, you know.

How can I achieve these goals without counting visitors?

A few raw ideas:

  • Donations might be a better metric. Bots do not donate, you know. I do not think I will ever receive much, but a donation requires some effort, so I will be sure that people really like something. Even if it’s 0.0000000000000000001 cents.
    • Hmm, maybe paid comments would be nice? At least I would have resources to moderate them and to answer questions in time.
  • Mentions of my blog on other websites. However, if I post a link to my blog somewhere myself, that will ruin the metric.
  • “Hireme” is there, but it would be a rare case anyway.
  • I can see traffic for my GitHub projects, which can be affected by the blog indirectly, but.. too indirectly.

Donations look like the best option for now…

.. sounds of investigation process …

Ok, I’ve registered on “Buy Me a Coffee”. Here is the link https://www.buymeacoffee.com/snowyurik

How to add “Buy me a coffee” to a hugo website?

Let’s see..

Maybe for other themes it will be different, but at least for the binario theme:

  • Go to themes/binario/layouts/_default/baseof.html
  • Add the <script>.. from Buy Me a Coffee there, and the widget will appear on all pages.

Ok, done with that.

Custom color for “buy me a coffee” widget

The <script>.. has a parameter data-color; let’s change it and see what happens.. that affects only the circle and its iframe. As far as I remember we can access the iframe, but I do not want to dig into it right now. So let’s leave it as is.

Can I make the coffee cost less?

I do not really care about money here; I would rather receive more frequent feedback. Let’s see if I can change the minimal donation..

It’s here https://www.buymeacoffee.com/page-settings, so I’m switching from $5 (which is the default) to $1 (which is the minimum). Checking if it works.. nice 😊

Ok, if you want to say something about it - feel free to add a comment to a $1 donation using the widget in the bottom right corner 😅

Just a few useful .bashrc changes

I found it useful to write wrappers for some commands and put them in ~/.bashrc when:

  • The command is too long to memorize
  • The command is not self-explanatory

Here are some:

parallel() {
    nohup "$@" > /dev/null 2>&1 &
}

Usage

parallel command arg1 arg2 arg3 ...

Example

parallel kate .

This will open the current folder in the kate editor.
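Another example, assuming firefox is installed - open a page in the browser without blocking the terminal:

parallel firefox https://example.com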

Record video from screen with ffmpeg

capturescreen() {
    ffmpeg -video_size 1920x1080 -framerate 25 -f x11grab -i :0.0+0,0 $1
}

Usage

capturescreen outfile.mp4

git branch, but sorted by last commit date and with description

gitbranches() {
    branch=""
    branches=`git branch $@`
    while read -r branch; do
        clean_branch_name=${branch//\*\ /}
        description=`git config branch.$clean_branch_name.description`
        lastcommitdate=`git for-each-ref --sort=committerdate "refs/heads/**/${clean_branch_name}" --format='%(committerdate:short)'`
        if [ "$clean_branch_name" != "$branch" ]; then
            printf "\033[0;32m";
        fi;
        printf "%s\n" "$branch    [$lastcommitdate]   $description"
        if [ "$clean_branch_name" != "$branch" ]; then
            printf "\033[0m";
        fi;
    done <<< "$branches"
}

Useful if you use a branch per task, like in GitFlow. You can pass git branch arguments to this function as usual.

Example

gitbranches --no-merged

Init envvars for my current projects

initmyenv() {
    if ! [[ "$PATH" =~ "$HOME/scripts:" ]]
    then
        PATH="$HOME/scripts:$PATH"
    fi
    export PATH
    export PROJECT1_DBCONNECT="Server=localhost;Database=project1;User Id=sa;Password=somepw;"
    export PROJECT2_DBCONNECT="Server=localhost;Database=project2;User Id=sa;Password=somepw;"
    ...
}

Btw, this way you do not leak your db passwords to git. Search 12factor for more )

Show git graph in text mode

gitgraph() {
    git log --graph --full-history --all --color --pretty=format:"%x1b[31m%h%x09%x1b[32m%d%x1b[0m%x20%s"
}

Help for my custom commands

helb() {
    echo ".. i need sombody, heeeelb";
    echo "
        parallel <cmd> <args>   - run command in parallel and forward stderr and stdout to /dev/null
        gitgraph                - display git log in nice form, with branch tree
        gitbranches             - display git branch with descriptions
        capturescreen   <outfile> - wrapper for ffmpeg -video_size 1920x1080 -framerate 25 -f x11grab -i :0.0+0,0 file.mp4
    ";
    # also show my custom scripts
    ls -1 ~/scripts
}

I put a call to helb at the end of .bashrc, so every time I open a new console I can see it. That’s it )
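So the very end of my ~/.bashrc looks roughly like this (the functions above are defined earlier in the same file):

# ... function definitions above ...
helb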

ResultMatcher for .andExpect

Let’s say we save some list of items and then read it back. Here is our original list:

List<MyItem> myItemList = new ArrayList<MyItem>() {{
        add( new MyItem() {{
            title = "Test Item 1";
        }});
        add( ... );
        ...
    }};

And we have mockMvc defined like:

@Autowired MockMvc mvc;

We can store the list with mvc.perform( put("/api/myitem/list" )).andExpect( status().isOk() ).
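For completeness, here is a possible shape of that store call - assuming an @Autowired ObjectMapper, the usual static imports from MockMvcRequestBuilders / MockMvcResultMatchers, and org.springframework.http.MediaType; the names are illustrative:

// serialize the original list once, we will reuse the string below
String myItemListAsJsonString = objectMapper.writeValueAsString(myItemList);

this.mvc.perform( put("/api/myitem/list")
        .contentType(MediaType.APPLICATION_JSON)
        .content(myItemListAsJsonString) )
    .andExpect( status().isOk() );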

We can read the list with mvc.perform( get("/api/myitem/list" )).andExpect( status().isOk() ). But!

The problem

We do not have database ids in our original list, so .andExpect( content().json( myItemListAsJsonString ) ) will give us a false-negative result.

Solution - use a lambda callback for comparison

this.mvc.perform(get("/api/myitem/list"))
    .andExpect( status().isOk() )
    .andExpect( (result)->{ // lambda for ResultMatcher
                            // return void
                            // if result not match expectation - we should throw exception
    
        String jsonString = result.getResponse().getContentAsString();
        List<MyItem> storedMyItemList = objectMapper.readValue(jsonString, 
                                            new TypeReference<List<MyItem>>(){});
        for( MyItem storedItem : storedMyItemList) {
            myItemList.stream().filter( x -> storedItem.title.equals(x.title))
                .findFirst().orElseThrow(
                    () -> { // lambda for exception
                        throw new AssertionError("MyItem not found, expected: \n"
                            + myItemListAsJsonString + "\nactual: \n" + jsonString
                            + "myitem.id field is ignored"
                        );
                    }
                );
        }
    });

Maven multimodule project with Spring Boot

I will try to stay close to a real task here. Let’s say we have a REST API web application and a command line application for manipulating secure data (installation, initial user creation, etc). Both will work with the same data, so we need a 3rd project which will be linked as a dependency from cli and web.

Folder structure

rootProject
 -> datalib
 -> cli
 -> web

Project types

rootProject/pom.xml should use packaging type pom. Type pom means that rootProject is just a place for references to the other projects. There is no target and no .jar file for the rootProject; it is used by the subprojects (aka modules) to find each other, plus if you have the same dependency in different subprojects you only need to mention it once in rootProject/pom.xml.

All subprojects / modules will have type jar in our case, so the build will produce .jar files.

Linking everything together

Package name

The package name aka namespace has to be the same for the rootProject and the submodules.

GroupID

project/groupId also has to be the same for rootProject and submodules, like this:

<project ..>
  ...
  <groupId>org.rootProject</groupId>
  ...
</project>

file: rootProject/pom.xml

<project ..>
  ...
  <!-- modules -->
  <modules>
    <module>web</module>
    <module>cli</module>
    <module>datalib</module>
  </modules>

files: rootProject/cli/pom.xml, rootProject/web/pom.xml, rootProject/datalib/pom.xml

<project>
    ...
    <parent>
        <artifactId>rootProject</artifactId>
        <groupId>org.rootProject</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    ...
</project>

But wait! Spring has to be parent project too!

Obviously, we can’t have 2 parents for a module, so we have to move the Spring reference to rootProject/pom.xml:

<project>
    <groupId>org.rootProject</groupId>
    <artifactId>rootProject</artifactId>
    <packaging>pom</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>rootProject</name>
    
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.4</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    ...
</project>

The hierarchy is like this:

  • Spring Boot
    • rootProject
      • web
      • cli
      • datalib

Using one module from another

Right now our modules know about rootProject, but they do not know about each other. Datalib will be used in cli and web, so we need to add a dependency:

files: rootProject/cli/pom.xml, rootProject/web/pom.xml

<project>
    ...
    <dependencies>
		<dependency>
			<groupId>org.rootProject</groupId>
			<artifactId>datalib</artifactId>
			<version>${project.version}</version>
			<scope>provided</scope>
			<type>jar</type>
		</dependency>
		...
    </dependencies>
    ...
</project>

Build commands

That’s a bit counterintuitive. You need 2 commands:

From rootProject/

mvn clean install

will build all projects in the correct order and install the resulting .jar files into the local Maven repository (~/.m2), where the other modules will be able to find them. To run Maven commands on a subproject, you need to use -pl and -am, like this:

mvn -am -pl web spring-boot:run

or

mvn -am -pl cli test

-am means “also make”: build the required dependency modules as well. Without that you might get a symbol not found error.

-pl is just short for --projects; it sets the build context to the given module.

Spring is loosely coupled, but not that loosely coupled

You kind of can use Spring only for the web module, but you would need to create wrapper components for the datalib classes. And I don’t think it’s possible to use a Spring-based datalib in cli if cli is not Spring-based. But I might be wrong.

A bit of everyday hugo commands

I’m using Hugo as a blog engine. So, here are a few CLI commands.

Create new post

hugo new post/postname.md

Formatting in summary

By default the summary is plain text, soo.. just use <!--mоre--> and formatting will work in the summary. I’m going to make most posts summary-only; maybe I can change something in hugo to add <!--mоre--> automatically?

Lets try..

Yes, I can change archetypes/default.md. And I also want draft to be false by default.. here we go:

---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: false
---

<!--mоre-->

Btw: I used a Cyrillic ‘o’ in ‘more’, otherwise it breaks the layout. Maybe there is a better solution, but I do not think I will need to talk about <!--mоre--> again any time soon.

Publish post

  • Go to the post header section (top of the .md file)
  • Remove draft: true or replace it with draft: false

Run locally

hugo server -D

will run the hugo server in “development” mode; draft posts are visible

hugo server

will run it in “release” mode

Deploy

  • We need 2 repositories:
    – One is our root repository with the themes and all raw data
    – A subproject linked to %blogname%.github.io
  • So: git submodule add %ssh.link.to.blogname.project.on.github% public
  • The hugo command will compile static HTML content into the public folder
  • cd public && git add . && git commit -m 'whatever' && git push (see the combined example below)
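Putting the last two bullets together, a typical publish run might look like this (assuming the public/ submodule setup above):

hugo
cd public && git add . && git commit -m 'new post' && git push && cd ..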

Also: README.md has priority over index.html, so remove it if you added it during project creation.

So, it's crisis time

Hosting services have become unpredictable these days. The fact is: you can lose access to your content. Using “discipline” for content backups does not work long term, even if the backups are automated. But there is a solution:

If you create a workflow where content can only be published if you already have a backup - you will always have a backup.

So, here we are: on github.io, aka GitHub Pages.

  • Even if I lose access to GitHub - I will still have it all on my laptop.
  • If I lose my laptop - I will be able to restore the content from git.

In the past I had a blog built around the idea of transparency, more like a diary. However.. this time I think I should focus. But we will see.

Right now, I will focus on 2 topics:

  • Technical tips, because there are a lot of new technologies which require too much searching for simple things.
  • Digital nomad life tips, like “How to get cheap fresh water in Thailand”

Basically, anything that takes more than a day to figure out might be worth posting.