index.json
[{"content":"Introduction In this post, we will see how to use files as a flag in the Golang Command Line Interface (CLI). This is useful when you want to pass a file (not the file name, but an object of file type) as a flag to your CLI application.\nUse the file name as a flag is straightforward, you can use the flag.String or flag.StringVar function to get the file name and then do the necessary checks to validate if this exist, create it, clean it, etc. But, if you want to pass the file as a flag, you need to create a custom flag type that implements the Value interface of the flag package for the struct type that represents the file flag.\nFor impatient 😔, you can check the 👉 GitHub repository -\u0026gt; github.com/slashdevops/go-files-as-a-flag or in the 👉 Implementation section.\nCustom Flag Type To implement a custom flag type for a file flag, you need to create a struct type that represents the file flag you want to create. This struct type must implement the Value interface of the flag package.\nThis interface has the following methods:\n👉 String() string: presents the current value as a string. 👉 Set(string) error: is called once, in command line order, for each flag present. 👉 Get() interface{}: returns the contents of the Value. 👉 IsBoolFlag() bool: returns true if the flag is a boolean flag. So, I created a struct type called FileVar that has a *os.File field to store the file and implemented the Value interface methods.\nI didn\u0026rsquo;t need to use more fields because the *os.File field is enough for my use case. But, you can add more fields if you need to store more information about the file.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 type FileVar struct { *os.File } func (f *FileVar) String() string { ... (implementation is below) 👇 } func (f *FileVar) Set(value string) error { ... (implementation is below) 👇 } func (f *FileVar) Get() interface{} { ... (implementation is below) 👇 } func (f *FileVar) IsBoolFlag() bool { ... (implementation is below) 👇 } NOTE: check the lines 👉 12, 👉 17, 👉 26, 👉 37, 👉 42 of the Implementation to see how I implemented these methods.\nUsage To use the file flag in your CLI application, you need to create a new flag set using the flag.NewFlagSet function. Then, you can add the file flag using the FlagSet.Var function as is implemented in the Implementation line 👉 48 and 👉 57.\nExample of usage with --help flag:\n1 2 3 4 5 6 go run main.go --help Usage of File as Flag CLI: -file.content string content to write to the file -output.file value output file (default /dev/stdout) NOTE: as you can see, the output.file flag has the default value /dev/stdout. So, if you don\u0026rsquo;t pass this flag, the output will be written to the stdout, let\u0026rsquo;s see this in the next example.\nExample of usage with --file.content flag:\n1 2 3 4 go run main.go --file.content \u0026#39;Hello, World\u0026#39; # and the output will be: Hello, World! Now, let\u0026rsquo;s see how to use the output.file flag to write the output to a file.\nExample of usage with --file.content and --output.file flags:\n1 2 3 4 5 6 7 8 9 go run main.go \\ -file.content \u0026#39;Hello, World!\u0026#39; \\ -output.file /tmp/my-output-file.txt # let\u0026#39;s check the content of the file cat /tmp/my-output-file.txt # and the output will be: Hello, World! NOTE: as you can see, the output was written to the /tmp/my-output-file.txt file.\nCritical Points For me ☝️ the magic 🪄 of this implementation is in the Set method of the FileVar struct. 
Critical Points

For me ☝️ the magic 🪄 of this implementation is in the Set method of the FileVar struct. This method is called once, in command line order, for each flag present. So, in this method, I open the file with the os.OpenFile function and set the *os.File field of the FileVar struct to the resulting file descriptor.
The options os.O_APPEND|os.O_CREATE|os.O_WRONLY are used to open the file in write-only mode, create it if it does not exist, and append the content if the file already exists.
NOTE: if you want to overwrite the file content instead of appending to it, use the os.O_TRUNC option instead of the os.O_APPEND option.

```go
func (f *FileVar) Set(value string) error {
	file, err := os.OpenFile(value, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}

	f.File = file
	return nil
}
```

Implementation

This code is available in the GitHub repository -> github.com/slashdevops/go-files-as-a-flag.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// FileVar is a custom flag type for files
// This should implement the Value interface of the flag package
// Reference: https://pkg.go.dev/gg-scm.io/tool/internal/flag#FlagSet.Var
type FileVar struct {
	*os.File
}

// String presents the current value as a string.
func (f *FileVar) String() string {
	if f.File == nil {
		return ""
	}

	return f.Name()
}

// Set is called once, in command line order, for each flag present.
func (f *FileVar) Set(value string) error {
	file, err := os.OpenFile(value, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}

	f.File = file
	return nil
}

// Get returns the contents of the Value.
func (f *FileVar) Get() interface{} {
	return f.File
}

// IsBoolFlag returns true if the flag is a boolean flag
func (f *FileVar) IsBoolFlag() bool {
	return false
}

func main() {
	// Create a new flag set
	fs := flag.NewFlagSet("File as Flag CLI", flag.ExitOnError)

	// Add a flag to get some content
	var content string
	fs.StringVar(&content, "file.content", "", "content to write to the file")

	// Add a custom file flag
	file := &FileVar{os.Stdout}
	defer file.Close()
	fs.Var(file, "output.file", "output file")

	// Parse the command line arguments
	fs.Parse(os.Args[1:])

	// Check if the content is empty (required)
	if content == "" {
		fs.PrintDefaults()
		fmt.Println("error: '-file.content' is required")
		os.Exit(1)
	}

	// Write the content to the file
	file.Write([]byte(content))
}
```
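One small addition you may find useful (it is not in the repository): compile-time assertions that FileVar really satisfies the interfaces the flag package looks for. flag.Value covers String and Set, and flag.Getter is the extension interface that adds Get:

```go
// Compile-time checks: these fail the build if a method is missing or
// has the wrong signature, and they cost nothing at runtime.
var (
	_ flag.Value  = (*FileVar)(nil)
	_ flag.Getter = (*FileVar)(nil)
)
```

IsBoolFlag has no exported interface in the flag package; it is detected through an anonymous interface check while flags are parsed.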
Conclusion

The flag package of the Golang standard library is powerful and flexible, as you saw in this post. You can create custom flag types to extend the functionality of the package and adapt it to your needs. To do this, implement the Value interface of the flag package for the struct type that represents your flag type.
I hope this post helps you use files as a flag in your Golang CLI applications. If you have any questions or suggestions, please let me know in the comments below. 🙏
","permalink":"https://slashdevops.com/post/2024/04/07/1/using-files-as-a-flag-in-the-golang-command-line-interface-cli/","summary":"Introduction In this post, we will see how to use files as a flag in the Golang Command Line Interface (CLI). This is useful when you want to pass a file (not the file name, but an object of file type) as a flag to your CLI application.
Using the file name as a flag is straightforward: you can use the flag.String or flag.StringVar function to get the file name and then do the necessary checks to validate that it exists, create it, clean it, etc.","title":"Using Files as a Flag in the Golang Command Line Interface (CLI)"},{"content":"Introduction

I have been using Visual Studio Code (vscode) for my C language projects while I re-learn this powerful programming language. One of the features that I like about vscode is the ability to build and debug C language projects using a Makefile. In this post, I will show you how to build and debug a C project in vscode using a Makefile.
Also, I built a GitHub repository template for this project called c-library-template, which you can use to create a new C project with the following features:

- C project structure
- Makefile
- Visual Studio Code configuration files
- GitHub Actions CI/CD workflow

NOTES:

- This post is based on macOS, but you can use it on other operating systems like Linux and Windows.
- I am using Homebrew to install the required tools and libraries.

Prerequisites

- Visual Studio Code
- Visual Studio Code - C/C++ extension
- Visual Studio Code - Makefile Tools
- GCC, the GNU Compiler
- Make
- Xcode (for macOS users)
- GitHub CLI (optional)

Install prerequisites

Install Visual Studio Code

Download and install Visual Studio Code manually or using the following terminal command:

```bash
brew install --cask visual-studio-code
```

Install Visual Studio Code extensions

- C/C++ extension
- C/C++ Extension Pack
- C/C++ Themes
- Makefile Tools

For each extension mentioned before, you can install it using the following steps:
Open Visual Studio Code, click on the Extensions icon on the left sidebar, search for <extension name> and click on the Install button to install the extension.
Also, you can install the extensions using the following terminal commands:

```bash
code --install-extension ms-vscode.cpptools
code --install-extension ms-vscode.cpptools-extension-pack
code --install-extension ms-vscode.cpptools-themes
code --install-extension ms-vscode.makefile-tools
```

Install GCC, the GNU Compiler

Install gcc using the following terminal command:

```bash
brew install gcc
```

Install GNU Make

Install make using the following terminal command:

```bash
brew install make
```

Create a new C library project

I created a new repository template called c-library-template using GitHub CLI:

```bash
gh repo create slashdevops/c-library-template \
  --add-readme \
  --public \
  --description "C library template" \
  --gitignore "C" \
  --license "Apache-2.0" \
  --clone
```

Then, inside the project directory c-library-template, I created the following project structure:

```bash
mkdir {.vscode,src,include,tests}
touch {.gitignore,README.md,Makefile}
touch {include/linkedlist.h,src/linkedlist.c,tests/main.c}
touch {.vscode/launch.json,.vscode/tasks.json}
touch {.vscode/c_cpp_properties.json,.vscode/settings.json}
```

From inside the project directory c-library-template, after running the above commands, I used the tree command to show the project structure as follows: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ❯ tree -I .git -a . 
├── .gitignore ├── .vscode │ ├── c_cpp_properties.json │ ├── launch.json │ ├── settings.json │ └── tasks.json ├── LICENSE ├── Makefile ├── README.md ├── include │ └── linkedlist.h ├── src │ └── linkedlist.c └── tests └── main.c 5 directories, 11 files These are the files and directories that I used as a project structure.\nComplementing .gitignore file I added a extra files and directories to the c-library-template -\u0026gt; .gitignore file executing the following command:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 cat \u0026lt;\u0026lt; EOF \u0026gt;\u0026gt; .gitignore # Extra files and directories lib/ obj/ build/ html/ latex/ .DS_Store ._.DS_Store **/.DS_Store **/._.DS_Store EOF C code files The c-library-template -\u0026gt; include/linkedlist.h content is as follows:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 #ifndef LINKEDLIST_H #define LINKEDLIST_H #include \u0026lt;stdio.h\u0026gt; typedef struct Node { // size of the data type size_t size; void *data; struct Node *next; } Node; typedef struct List { Node *head; size_t size; // used to have a reference to the last node, but // this is not a circular linked list Node *tail; } List; List *list_new(); void list_destroy(List *list); void list_node_destroy(Node *node); void list_prepend(List *list, Node *node); void list_append(List *list, Node *node); void list_prepend_value(List *list, void *value, size_t size); void list_append_value(List *list, void *value, size_t size); Node *list_pop(List *list); size_t list_size(List *list); #endif // LINKEDLIST_H The c-library-template -\u0026gt; src/linkedlist.c content is as follows:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 #include \u0026#34;../include/linkedlist.h\u0026#34; #include \u0026lt;stdio.h\u0026gt; #include \u0026lt;stdlib.h\u0026gt; List *list_new() { List *l = (List *)malloc(sizeof(List)); l-\u0026gt;head = NULL; l-\u0026gt;tail = NULL; l-\u0026gt;size = 0; return l; } void list_destroy(List *list) { if (list-\u0026gt;head == NULL) { free(list); return; } Node *temp_node = NULL; while (list-\u0026gt;head != NULL) { temp_node = list-\u0026gt;head; list-\u0026gt;head = temp_node-\u0026gt;next; list_node_destroy(temp_node); } free(list); } void list_node_destroy(Node *node) { free(node-\u0026gt;data); free(node); } void list_prepend(List *list, Node *node) { if (list == NULL || node == NULL) { return; } // store the pointer to the first element prepend to the list // to keep track of the tail of the list if (list-\u0026gt;size == 0) { list-\u0026gt;tail = node; } node-\u0026gt;next = list-\u0026gt;head; list-\u0026gt;head = node; list-\u0026gt;size++; } void list_append(List *list, Node *node) { if (list == NULL || node == NULL) { return; } node-\u0026gt;next = NULL; // store the pointer to the first element prepend to the list // to keep track of the tail of the list if (list-\u0026gt;size == 0) { list-\u0026gt;head = node; list-\u0026gt;tail = node; } // add the node to the tail of the list list-\u0026gt;tail-\u0026gt;next = node; list-\u0026gt;tail = node; list-\u0026gt;size++; } size_t list_size(List *list) { return list-\u0026gt;size; } 
void list_prepend_value(List *list, void *value, size_t size) { Node *node = malloc(sizeof(Node)); node-\u0026gt;next = NULL; node-\u0026gt;data = malloc(size); node-\u0026gt;data = value; node-\u0026gt;size = size; list_prepend(list, node); } void list_append_value(List *list, void *value, size_t size) { Node *node = malloc(sizeof(Node)); node-\u0026gt;next = NULL; node-\u0026gt;data = malloc(size); node-\u0026gt;data = value; node-\u0026gt;size = size; list_append(list, node); } Node *list_pop(List *list) { if (list == NULL || list-\u0026gt;head == NULL) { return NULL; } Node *node = list-\u0026gt;head; if (list-\u0026gt;head-\u0026gt;next != NULL) { list-\u0026gt;head = list-\u0026gt;head-\u0026gt;next; list-\u0026gt;size--; } else // this is the last { list-\u0026gt;head = NULL; list-\u0026gt;tail = NULL; list-\u0026gt;size = 0; } node-\u0026gt;next = NULL; return node; } The c-library-template -\u0026gt; tests/main.c content is as follows:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 #include \u0026#34;../include/linkedlist.h\u0026#34; #include \u0026lt;assert.h\u0026gt; #include \u0026lt;stdio.h\u0026gt; #include \u0026lt;stdlib.h\u0026gt; #include \u0026lt;string.h\u0026gt; void test_list_new() { List *list = list_new(); assert(list-\u0026gt;size == 0); assert(list-\u0026gt;head == NULL); assert(list-\u0026gt;tail == NULL); list_destroy(list); } void test_list_size_new() { List *list = list_new(); assert(list_size(list) == 0); list_destroy(list); } void test_prepend_to_new_list() { List *list = list_new(); Node *node = malloc(sizeof(Node)); list_prepend(list, node); assert(list_size(list) == 1); list_destroy(list); } void test_prepend_10() { List *list = list_new(); for (int i = 0; i \u0026lt; 10; i++) { Node *node = malloc(sizeof(Node)); node-\u0026gt;next = NULL; node-\u0026gt;size = sizeof(int); node-\u0026gt;data = malloc(sizeof(Node)); memcpy(node-\u0026gt;data, \u0026amp;i, sizeof(int)); list_prepend(list, node); } assert(list_size(list) == 10); Node *node = list-\u0026gt;head; for (int i = 0; i \u0026lt; 10; i++) { assert(node-\u0026gt;size == sizeof(int)); int *val = (int *)node-\u0026gt;data; // printf(\u0026#34;node value = %d, i= %d\\n\u0026#34;, *val, (9 - i)); assert(*val == (9 - i)); node = node-\u0026gt;next; } list_destroy(list); } void test_append_10() { List *list = list_new(); for (int i = 0; i \u0026lt; 10; i++) { Node *node = malloc(sizeof(Node)); node-\u0026gt;next = NULL; node-\u0026gt;size = sizeof(int); node-\u0026gt;data = malloc(sizeof(Node)); memcpy(node-\u0026gt;data, \u0026amp;i, sizeof(int)); list_append(list, node); } assert(list_size(list) == 10); Node *node = list-\u0026gt;head; for (int i = 0; i \u0026lt; 10; i++) { assert(node-\u0026gt;size == sizeof(int)); int *val = (int *)node-\u0026gt;data; // 
printf(\u0026#34;node value = %d, i= %d\\n\u0026#34;, *val, i); assert(*val == i); node = node-\u0026gt;next; } list_destroy(list); } void test_list_destroy_10() { List *list = list_new(); for (int i = 0; i \u0026lt; 10; i++) { Node *node = malloc(sizeof(Node)); node-\u0026gt;data = NULL; node-\u0026gt;next = NULL; list_prepend(list, node); } assert(list_size(list) == 10); list_destroy(list); } void test_list_prepend_value() { List *list = list_new(); for (int i = 10; i \u0026gt; 0; i--) { int *val = malloc(sizeof(int)); *val = i; list_prepend_value(list, val, sizeof(int)); } assert(list_size(list) == 10); // check elemets in the list Node *temp_node = list-\u0026gt;head; for (int i = 0; i \u0026lt; 10; i++) { int *val = (int *)temp_node-\u0026gt;data; // printf(\u0026#34;value = %d, size = %zu (bytes)\\n\u0026#34;, *val, temp_node-\u0026gt;size); assert(*val == i + 1); assert(sizeof(int) == temp_node-\u0026gt;size); temp_node = temp_node-\u0026gt;next; } list_destroy(list); } void test_list_append_value() { List *list = list_new(); for (int i = 10; i \u0026gt; 0; i--) { int *val = malloc(sizeof(int)); *val = i; list_append_value(list, val, sizeof(int)); } assert(list_size(list) == 10); // check elements in the list Node *temp_node = list-\u0026gt;head; for (int i = 0; i \u0026lt; 10; i++) { int *val = (int *)temp_node-\u0026gt;data; // printf(\u0026#34;value = %d, size = %zu (bytes)\\n\u0026#34;, *val, temp_node-\u0026gt;size); assert(*val == 10 - i); assert(sizeof(int) == temp_node-\u0026gt;size); temp_node = temp_node-\u0026gt;next; } list_destroy(list); } void test_list_pop_all() { List *list = list_new(); for (int i = 0; i \u0026lt; 10; i++) { Node *node = malloc(sizeof(Node)); node-\u0026gt;data = malloc(sizeof(int)); memcpy(node-\u0026gt;data, \u0026amp;i, sizeof(int)); node-\u0026gt;next = NULL; node-\u0026gt;size = sizeof(int); list_prepend(list, node); } assert(list_size(list) == 10); for (int i = 0; i \u0026lt; 10; i++) { Node *pop_node = list_pop(list); assert(pop_node != NULL); // int *val = (int *)pop_node-\u0026gt;data; // printf(\u0026#34;value: %d, size: %zu\\n\u0026#34;, *val, pop_node-\u0026gt;size); list_node_destroy(pop_node); } list_destroy(list); } void tests_run_all(void) { test_list_new(); test_list_size_new(); test_prepend_to_new_list(); test_prepend_10(); test_append_10(); test_list_destroy_10(); test_list_prepend_value(); test_list_append_value(); test_list_pop_all(); } int main(void) { tests_run_all(); } Makefile This Makefile has a help command that you can use to display the available commands.\nYou can use the following command to display the help command:\n1 make help The c-library-template -\u0026gt; Makefile content is as follows:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 .DELETE_ON_ERROR: clean # Where to find tools TEST_APP = test_linkedlist TARGET_LIB = linkedlistlib.so CC_MACOS ?= /opt/homebrew/bin/gcc-13 CC_LINUX ?= /usr/bin/gcc AR_MACOS ?= /opt/homebrew/bin/gcc-ar-13 AR_LINUX ?= /usr/bin/ar MEMCHECK_MACOS ?= /usr/bin/leaks MEMCHECK_LINUX ?= /usr/bin/valgrind # Determine OS UNAME_S := $(shell uname -s) ifeq ($(UNAME_S),Darwin) MEMCHECK = $(MEMCHECK_MACOS) MEMCHECK_ARGS = --atExit -- CC = $(CC_MACOS) AR = $(AR_MACOS) endif ifeq 
($(UNAME_S),Linux) MEMCHECK = $(MEMCHECK_LINUX) MEMCHECK_ARGS = CC = $(CC_LINUX) AR = $(AR_LINUX) endif # Check if OS is supported ifneq ($(UNAME_S),Darwin) ifneq ($(UNAME_S),Linux) $(error \u0026#34;Unsupported OS\u0026#34;) endif endif # Check if executables are in PATH EXECUTABLES = $(CC) $(AR) $(MEMCHECK) K := $(foreach exec,$(EXECUTABLES),\\ $(if $(shell which $(exec)),some string,$(error \u0026#34;No $(exec) in PATH))) # Compiler and linker flags CFLAGS = -Wall -Wextra -Werror -Wunused -O2 -g -std=c2x -pedantic # Compiler flags LDFLAGS = -shared # Linker flags (shared library) (change to -static for static library) SRC_DIR := src OBJ_DIR := obj LIB_DIR := lib TEST_SRC_DIR := tests TEST_OBJ_DIR := obj BUILD_DIR := build SRC_FILES = $(wildcard $(SRC_DIR)/*.c) OBJ_FILES = $(SRC_FILES:$(SRC_DIR)/%.c=$(OBJ_DIR)/%.o) TEST_FILES = $(wildcard $(TEST_SRC_DIR)/*.c) TEST_OBJS = $(TEST_FILES:$(TEST_SRC_DIR)/%.c=$(TEST_OBJ_DIR)/%.o) INCLUDE_DIRS = -Iinclude # Targets ##@ Default target .PHONY: all all: clean build ## Clean and build the library ##@ Build commands .PHONY: clean build build: $(TARGET_LIB) ## Clean and build the library $(TARGET_LIB): $(OBJ_FILES) | $(LIB_DIR) $(CC) $(LDFLAGS) -o $(LIB_DIR)/$@ $^ $(OBJ_DIR)/%.o: $(SRC_DIR)/%.c | $(OBJ_DIR) $(CC) $(CFLAGS) $(INCLUDE_DIRS) -c -o $@ $\u0026lt; $(TEST_APP): $(TEST_OBJS) $(LIB_DIR)/$(TARGET_LIB) | $(BUILD_DIR) $(CC) $(CFLAGS) $(INCLUDE_DIRS) -o $(BUILD_DIR)/$@ $^ $(TEST_OBJ_DIR)/%.o: $(TEST_SRC_DIR)/%.c | $(TEST_OBJ_DIR) $(CC) $(CFLAGS) $(INCLUDE_DIRS) -c -o $@ $\u0026lt; $(BUILD_DIR): @mkdir -p $(BUILD_DIR) $(OBJ_DIR): @mkdir -p $(OBJ_DIR) $(LIB_DIR): @mkdir -p $(LIB_DIR) ##@ Test commands .PHONY: test test: clean build $(TEST_APP) ## Run tests @echo \u0026#34;Running tests...\u0026#34; ./$(BUILD_DIR)/$(TEST_APP) .PHONY: memcheck memcheck: test ## Run tests and check for memory leaks @echo \u0026#34;Running tests with memory check...\u0026#34; $(MEMCHECK) $(MEMCHECK_ARGS) ./$(BUILD_DIR)/$(TEST_APP) ##@ Clean commands .PHONY: clean clean: ## Clean built artifacts @rm -rf $(BUILD_DIR) @rm -rf $(OBJ_DIR) @rm -rf $(LIB_DIR) ##@ Help commands .PHONY: help help: ## Display this help @awk \u0026#39;BEGIN {FS = \u0026#34;:.*##\u0026#34;; \\ printf \u0026#34;Usage: make \\033[36m\u0026lt;target\u0026gt;\\033[0m\\n\u0026#34;} /^[a-zA-Z_-]+:.*?##/ \\ { printf \u0026#34; \\033[36m%-10s\\033[0m %s\\n\u0026#34;, $$1, $$2 } /^##@/ \\ { printf \u0026#34;\\n\\033[1m%s\\033[0m\\n\u0026#34;, substr($$0, 5) } \u0026#39; \\ $(MAKEFILE_LIST) The Makefile has the following targets:\n1 make help Visual Studio Code configuration files The c-library-template -\u0026gt; .vscode/c_cpp_properties.json content is as follows:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 { \u0026#34;configurations\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;macos-gcc-arm64\u0026#34;, \u0026#34;compilerPath\u0026#34;: \u0026#34;/opt/homebrew/bin/gcc-13\u0026#34;, \u0026#34;intelliSenseMode\u0026#34;: \u0026#34;macos-gcc-arm64\u0026#34;, \u0026#34;includePath\u0026#34;: [ \u0026#34;${workspaceFolder}/**\u0026#34;, \u0026#34;${workspaceFolder}/include/**\u0026#34;, \u0026#34;/opt/homebrew/lib/gcc/13/**\u0026#34; ], \u0026#34;defines\u0026#34;: [], \u0026#34;macFrameworkPath\u0026#34;: [ \u0026#34;${workspaceFolder}/**\u0026#34;, \u0026#34;/System/Library/Frameworks\u0026#34; ], \u0026#34;cStandard\u0026#34;: \u0026#34;c23\u0026#34;, \u0026#34;cppStandard\u0026#34;: \u0026#34;c++23\u0026#34;, \u0026#34;configurationProvider\u0026#34;: 
\u0026#34;ms-vscode.makefile-tools\u0026#34;, \u0026#34;browse\u0026#34;: { \u0026#34;path\u0026#34;: [ \u0026#34;${workspaceFolder}/**\u0026#34;, \u0026#34;${workspaceFolder}/include/**\u0026#34;, \u0026#34;/opt/homebrew/lib/gcc/13/**\u0026#34; ] } } ], \u0026#34;version\u0026#34;: 4 } The c-library-template -\u0026gt; .vscode/launch.json content is as follows:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 { \u0026#34;version\u0026#34;: \u0026#34;2.0.0\u0026#34;, \u0026#34;configurations\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;C Debug -\u0026gt; linkedlist Makefile\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;cppdbg\u0026#34;, \u0026#34;request\u0026#34;: \u0026#34;launch\u0026#34;, \u0026#34;program\u0026#34;: \u0026#34;${workspaceFolder}/build/test_linkedlist\u0026#34;, \u0026#34;args\u0026#34;: [], \u0026#34;stopAtEntry\u0026#34;: true, \u0026#34;cwd\u0026#34;: \u0026#34;${workspaceFolder}\u0026#34;, \u0026#34;environment\u0026#34;: [], \u0026#34;externalConsole\u0026#34;: false, \u0026#34;MIMode\u0026#34;: \u0026#34;lldb\u0026#34;, \u0026#34;preLaunchTask\u0026#34;: \u0026#34;make\u0026#34; } ] } The c-library-template -\u0026gt; .vscode/tasks.json content is as follows:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 { \u0026#34;version\u0026#34;: \u0026#34;2.0.0\u0026#34;, \u0026#34;tasks\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;cppbuild\u0026#34;, \u0026#34;label\u0026#34;: \u0026#34;make\u0026#34;, \u0026#34;command\u0026#34;: \u0026#34;make \u0026amp;\u0026amp; make test\u0026#34;, \u0026#34;args\u0026#34;: [], \u0026#34;options\u0026#34;: { \u0026#34;cwd\u0026#34;: \u0026#34;${workspaceFolder}\u0026#34; }, \u0026#34;problemMatcher\u0026#34;: [ \u0026#34;$gcc\u0026#34; ], \u0026#34;group\u0026#34;: \u0026#34;build\u0026#34;, \u0026#34;detail\u0026#34;: \u0026#34;Build our program using make\u0026#34; } ], } The c-library-template -\u0026gt; .vscode/settings.json content is as follows:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 { \u0026#34;C_Cpp.errorSquiggles\u0026#34;: \u0026#34;enabled\u0026#34;, \u0026#34;C_Cpp.enhancedColorization\u0026#34;: \u0026#34;enabled\u0026#34;, \u0026#34;C_Cpp.intelliSenseEngine\u0026#34;: \u0026#34;Tag Parser\u0026#34;, \u0026#34;files.associations\u0026#34;: { \u0026#34;*.template\u0026#34;: \u0026#34;yaml\u0026#34;, \u0026#34;cstdlib\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;__hash_table\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;__split_buffer\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;array\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;bitset\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;deque\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;initializer_list\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;queue\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;span\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;stack\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;string\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;string_view\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;unordered_map\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;vector\u0026#34;: \u0026#34;c\u0026#34;, \u0026#34;format\u0026#34;: \u0026#34;c\u0026#34; }, } Build and Debug the C project Open the project directory c-library-template in Visual Studio Code: 1 2 cd c-library-template/ code . Build the project using the make command: 1 make Run the tests using the make test command: 1 2 3 4 5 # Run the tests make test # Run the tests and check for memory leaks make memcheck Debug the project using the make task: Set your breakpoints in the source code. 
Press Cmd + Shift + P to open the command palette.
Type C/C++: Debug and select C/C++: Debug C/C++ File.
Select the C Debug -> linkedlist Makefile task to build and start debugging the project.
(A short video demo accompanies this step on the original site.)

GitHub Actions CI/CD Workflow

I created a GitHub Actions CI/CD workflow to build and test the project on every push to the main branch.
The c-library-template -> .github/workflows/ci.yml content is as follows:

```yaml
name: C CI

on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]
  workflow_dispatch:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install dependencies
        run: |
          sudo apt install software-properties-common -y
          sudo add-apt-repository ppa:ubuntu-toolchain-r/test -y
          sudo apt-get update -y
          sudo apt-get install -y gcc-13 valgrind

      - name: Set up gcc-13
        run: |
          sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-13 90

      - name: Check versions
        run: |
          gcc --version
          make --version

      - name: make build
        run: make build

      - name: make test
        run: make test

      - name: make memcheck
        run: make memcheck

  build:
    runs-on: ubuntu-latest
    needs: test
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4

      - name: Install dependencies
        run: |
          sudo apt install software-properties-common -y
          sudo add-apt-repository ppa:ubuntu-toolchain-r/test -y
          sudo apt-get update -y
          sudo apt-get install -y gcc-13 valgrind

      - name: Set up gcc-13
        run: |
          sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-13 90

      - name: Check versions
        run: |
          gcc --version
          make --version

      - name: make build
        run: make build
```

Conclusion

When you are (re-)learning a programming language, it is important to have a good project structure and tools that help you build and debug your projects. I always spend time finding ways to make my life easier.
I hope this post helps you build and debug your C projects using Visual Studio Code with a Makefile.
Leave a comment if you have any questions or suggestions.
","permalink":"https://slashdevops.com/post/2024/03/31/2/building-and-debugging-a-c-project-in-visual-studio-code-with-a-makefile/","summary":"Introduction I have been using Visual Studio Code (vscode) for my C language projects while I re-learn this powerful programming language. One of the features that I like about vscode is the ability to build and debug C language projects using a Makefile. In this post, I will show you how to build and debug a C project in vscode using a Makefile.
Also, I built a GitHub repository template for this project called c-library-template, which you can use to create a new C project with the following features:","title":"Building and Debugging a C Project in Visual Studio Code with a Makefile"},{"content":"Introduction

I have been using WordPress for a long time to host the slashdevops.com blog site. I had been looking for a way to migrate this blog from WordPress to a static site generator, and during the research I found Hugo and I can say I am very happy with it. 
Hugo allows me to write these blog posts in Markdown, and it is very fast and easy to use.
I have also been using GitHub Pages for some of my other personal projects, and I decided to migrate this blog to GitHub Pages as well.
To perform this migration I used the following software:

- Hugo
- GitHub Pages
- GitHub Actions
- GitHub CLI
- Google Domains
- Disqus
- Fuse.js
- Hugo Theme -> PaperMod. Thanks to Aditya Telange for this theme 🙏.

My Experience

It took me some time to migrate this blog from WordPress to Hugo and GitHub Pages. I had to learn how to use Hugo and GitHub Pages, and how to configure my slashdevops.com domain in Google Domains, but it was worth it.
Now the blog implements Disqus for comments and likes, and it is very fast and responsive. The blog also implements fuzzy search using Fuse.js. I decided to drop the comments and likes from the WordPress posts; I am very sorry about that.
👉 The hardest part was understanding how GitHub Pages and Hugo should work together.
There are several tutorials on the internet that can help you with this migration, but not even the official documentation of Hugo and GitHub Pages helped me understand that one of the best ways to do this is with two repositories: one repository to host the source code of the blog (the Hugo repository, where you create the Markdown files) and another repository to host the compiled code of the blog (the static files Hugo generates in the public folder of the first repo).
Just to clarify, the source code repository is the one you use to write your blog posts, and the compiled code repository is the one you use to host the blog.
In truth, there are several ways to do this, and many blog posts and tutorials on the internet can help you with this configuration of GitHub Pages and Hugo. The majority use the two-repositories approach with git submodules, or override the docs folder of the same repository. You can also use a different branch in the same repository to store the static files generated by Hugo; the last option is to use the same repository to store both the source code and the static files, and deploy the static files to GitHub Pages using a workflow.
I decided to use the two-repositories approach without git submodules because it is easier to maintain and understand, and it gives me the ability to keep the Hugo repository (source code) private.

The Good 👍

- Hugo is very fast and "easy to use" (once you understand its philosophy).
- Hugo allows me to write these blog posts in Markdown.
- Hugo has a lot of themes to choose from.
- GitHub Pages can be free and "easy to use".
- The blog is very fast and responsive.
- The blog implements fuzzy search using Fuse.js thanks to the Hugo theme PaperMod implementation/integration.
- The blog is very easy to maintain and update.

The Bad 👎

- Migrating from WordPress to Hugo was not easy. I did it manually and it took me some time; fortunately I did not have many posts.
- At the beginning, I had some issues with the Hugo theme I chose (PaperMod), but I was able to fix them after understanding how Hugo works.
- I lost the comments and likes from the WordPress posts; I am very sorry about that.
- It took me some time to get the right GitHub Pages configuration to use a public repository to host the blog (static files) and a private repository to host the source code of the blog (Hugo files); the magic that made it work was the GitHub Actions workflow I wrote. 
Not so Bad / Not so Good 🫤

- I had to learn how to use Hugo and GitHub Pages; it took some time, but it was worth it.
- To have GitHub Pages for free, I had to use a public repository -> slashdevops.github.io, but I am happy with it. This means you are seeing the code of this blog and how it is built (static files). Fortunately, I can have a private repository to host the source code of the blog (the Hugo implementation).

Relevant steps

This is not a detailed step-by-step guide, because I made this migration in a series of back-and-forth steps until it worked, but I want to share with you the most relevant steps I took to get this blog working properly.
The steps I am sharing here are not necessarily in the order I did them, and some of them are interdependent, but I will try to explain them in the order I think is most relevant.
NOTES:

- You should have GitHub CLI installed and configured to perform some of these steps.
- You should have Hugo installed and configured to perform some of these steps.

1. Repository Configuration

As I explained before, to get this blog working properly I needed two repositories.

- slashdevops.github.io: the repository to host the compiled code of the blog (static files coming from Hugo's public folder).
- hugo-slashdevops.github.io: the repository to host the source code of the blog.

👉 I created these using the GitHub CLI, but you can create them using the GitHub web interface as well.

```bash
# Create the repository to host the compiled code of the blog
gh repo create slashdevops/slashdevops.github.io --public --description "slashdevops.com blog site" --clone --gitignore "html"

# Create the repository to host the source code of the blog
gh repo create slashdevops/hugo-slashdevops.github.io --private --description "slashdevops.com blog site source code" --clone
```

So, for my migration I used the following repositories:

1.1 slashdevops.github.io repository

The repository slashdevops/slashdevops.github.io is public and is used to host the compiled code of the blog (static files coming from Hugo's public folder).
It is filled by a GitHub Actions workflow that lives in the repository hugo-slashdevops.github.io and is used to build the blog and push the static files generated in the public folder to this repository.
This repository must not be empty to enable the GitHub Pages feature, so I created an index.html file with minimal content to avoid the repository being empty and to test the GitHub Pages configuration.
👉 The file I added to the repository slashdevops/slashdevops.github.io in the beginning was:

```html
<!DOCTYPE html>
<html>
<head>
<title>slashdevops.com blog page</title>
</head>
<body>
<h1>Welcome to the home page</h1>
</body>
</html>
```

👉 And the configuration of this repository in GitHub is:
👉 After pushing the minimal HTML file to the repository and configuring your domain DNS as explained in step 2 below, you can see the blog site working properly.

1.2 hugo-slashdevops.github.io repository

The repository slashdevops/hugo-slashdevops.github.io is private and is used to host the source code of the blog. 
Basically, this repository contains the Hugo configuration, the Markdown files of the blog posts and the Github Action -\u0026gt; Workflow to build and deploy the static files into the repository slashdevops.github.io.\nI used the .github/workflows/hugo.yaml configuration file recommended in Hugo - Host on GitHub Pages \u0026ndash; and then I modified it to fit my needs, which is to have two repositories, one for the source code and another for the static files. \u0026ndash;. This modification allows me to build the blog and deploy the static files to the repository slashdevops.github.io using git commands.\nIMPORTANT: I needed to configure the Github Actions -\u0026gt; Workflow to use the GITHUB_TOKEN with the right permissions to allow the deployment of the static files to the repository slashdevops.github.io. I named it ACTIONS_GITHUB_TOKEN (see the line 81 of the code below).\n👉 The content of the secret ACTIONS_GITHUB_TOKEN was created in the repository slashdevops/hugo-slashdevops.github.io in the Settings -\u0026gt; Secrets section of the repository and I used the GitHub CLI to create it.\n1 2 3 4 5 # Create the token gh auth token # Set the token in the repository slashdevops/hugo-slashdevops.github.io gh secret set ACTIONS_GITHUB_TOKEN -r slashdevops/hugo-slashdevops.github.io -b \u0026lt;token generated before\u0026gt; 👉 The code of the Github Action -\u0026gt; Workflow I used to build and deploy the blog is:\nSource: slashdevops/hugo-slashdevops.github.io -\u0026gt; .github/workflows/hugo.yml private repository.\nNOTES:\nThis is a private repository, so you can\u0026rsquo;t see the code, but I will show you the code here. Highlighted lines are the most important lines in the code. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 # Sample workflow for building and deploying a Hugo site to GitHub Pages name: Deploy Hugo site to Pages on: # Runs on pushes targeting the default branch push: branches: [\u0026#34;main\u0026#34;] # Allows you to run this workflow manually from the Actions tab workflow_dispatch: # Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages permissions: contents: read id-token: write # Default to bash defaults: run: shell: bash jobs: # Build job build: runs-on: ubuntu-latest env: SITE_URL: \u0026#34;https://slashdevops.com\u0026#34; HUGO_VERSION: 0.124.1 steps: - name: Install Hugo CLI run: | wget -O ${{ runner.temp }}/hugo.deb https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-amd64.deb \\ \u0026amp;\u0026amp; sudo dpkg -i ${{ runner.temp }}/hugo.deb - name: Install Dart Sass run: sudo snap install dart-sass - name: Checkout uses: actions/checkout@v4 with: submodules: recursive fetch-depth: 1 - name: Build with Hugo env: # For maximum backward compatibility with Hugo modules HUGO_ENVIRONMENT: production HUGO_ENV: production run: | hugo \\ --gc \\ --minify \\ --baseURL \u0026#34;${{ env.SITE_URL }}/\u0026#34; - name: Show files run: tree -I \u0026#39;.git\u0026#39; -a - name: Upload artifact uses: actions/upload-artifact@v4 with: name: static-files path: ./public retention-days: 1 overwrite: true # Deployment job deploy: env: STATIC_SITE_REPO: \u0026#34;slashdevops/slashdevops.github.io\u0026#34; STATIC_SITE_REPO_BRANCH: main runs-on: ubuntu-latest needs: 
build steps: - name: Checkout uses: actions/checkout@v4 with: repository: ${{ env.STATIC_SITE_REPO }} ref: ${{ env.STATIC_SITE_REPO_BRANCH }} token: ${{ secrets.ACTIONS_GITHUB_TOKEN }} - name: Download artifact uses: actions/download-artifact@v4 with: name: static-files path: . - name: Show files run: tree -I \u0026#39;.git\u0026#39; -a - name: Commit changes run: | git config --local user.name \u0026#34;GitHub Actions from hugo-slashdevops.github.io repo\u0026#34; git config --local user.email \u0026#34;[email protected]\u0026#34; git add . git commit -m \u0026#34;Deploy site\u0026#34; git push --force origin ${{ env.STATIC_SITE_REPO_BRANCH }} 2. Google Domains Configuration I made the validation of the domain slashdevops.com on my GitHub Settings account and then I configured the domain in Google Domains to point to the GitHub Pages servers (list of IPs in Configuring an apex domain).\nNOTES:\nThis step is not necessary if you are using the GitHub Pages domain, but it is recommended for security reasons. 👉 This image shows how I configure the verified domains in GitHub Settings:\n👉 After the Validation of the domain, I added the domain to the GitHub Pages configuration in the repository slashdevops.github.io settings.\nThen, I configured the domain in Google Domains to point to the GitHub Pages servers. This configuration is done in the DNS section of the domain configuration in Google Domains.\n👉 This image shows how I configured the domain in Google Domains:\nReferences:\nhttps://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site#configuring-an-apex-domain 3. Hugo Configuration Here there are some relevant configurations I made, let me enumerate them:\nThe hugo.toml configuration file. The hugo static files content. You will need to put at least the CNAME file in the static folder to configure the domain in GitHub Pages. Enabling messages and likes with Disqus. 👉 1. 
My hugo configuration is very simple, I used the Hugo Theme -\u0026gt; PaperMod Wiki file as template and I made some changes to fit my needs.\nMy hugo.toml configuration files is:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 baseURL = \u0026#34;https://slashdevops.com/\u0026#34; languageCode = \u0026#34;en-us\u0026#34; title = \u0026#34;SlashDevOps\u0026#34; theme = \u0026#34;PaperMod\u0026#34; # Read: https://github.com/adityatelange/hugo-PaperMod/wiki/FAQs#using-hugos-syntax-highlighter-chroma # https://bwaycer.github.io/hugo_tutorial.hugo/extras/highlighting/ pygmentsUseClasses = true ############################################################################### # metadata enableRobotsTXT = true buildDrafts = false buildFuture = false buildExpired = false [minify] disableXML = true minifyOutput = true ############################################################################### # menu [[menu.main]] identifier = \u0026#34;categories\u0026#34; name = \u0026#34;Categories\u0026#34; url = \u0026#34;/categories/\u0026#34; weight = 10 [[menu.main]] identifier = \u0026#34;GitHub\u0026#34; name = \u0026#34;GitHub\u0026#34; url = \u0026#34;https://github.com/slashdevops\u0026#34; weight = 10 [[menu.main]] identifier = \u0026#34;tags\u0026#34; name = \u0026#34;Tags\u0026#34; url = \u0026#34;/tags/\u0026#34; weight = 20 [[menu.main]] identifier = \u0026#34;blog\u0026#34; name = \u0026#34;Blog\u0026#34; url = \u0026#34;/post/\u0026#34; weight = 20 [[menu.main]] identifier = \u0026#34;search\u0026#34; name = \u0026#34;Search\u0026#34; url = \u0026#34;/search\u0026#34; weight = 20 [[menu.main]] identifier = \u0026#34;archives\u0026#34; name = \u0026#34;Archives\u0026#34; url = \u0026#34;/archives\u0026#34; weight = 20 ############################################################################### # params [params] env = \u0026#34;production\u0026#34; title = \u0026#34;SlashDevOps\u0026#34; description = \u0026#34;SlashDevOps is a blog about DevOps, Cloud, Containers, Kubernetes, CI/CD, Automation, Infrastructure as Code, and more.\u0026#34; keywords = [ \u0026#34;DevOps\u0026#34;, \u0026#34;Cloud\u0026#34;, \u0026#34;Containers\u0026#34;, \u0026#34;Kubernetes\u0026#34;, \u0026#34;CI/CD\u0026#34;, \u0026#34;Automation\u0026#34;, \u0026#34;Infrastructure as Code\u0026#34;, ] author = \u0026#34;Christian González Di Antonio\u0026#34; defaultTheme = \u0026#34;auto\u0026#34; # auto | light | dark disableThemeToggle = false ShowReadingTime = true ShowShareButtons = true ShowPostNavLinks = true ShowBreadCrumbs = true ShowCodeCopyButtons = true ShowWordCount = true ShowRssButtonInSectionTermList = true UseHugoToc = true disableSpecial1stPost = false disableScrollToTop = false comments = true hidemeta = false hideSummary = false showtoc = true tocopen = true searchHidden = false # logo [params.label] text = \u0026#34;slashdevops.com\u0026#34; icon = \u0026#34;/safari-pinned-tab.svg\u0026#34; iconHeight = 45 # profile mode [params.profileMode] enabled = false # home page 
[params.homeInfoParams] Title = \u0026#34;SlashDevops\u0026#39;s Blog\u0026#34; Content = \u0026#34;👉 Welcome to the SlashDevOps blog. Here you will find articles about DevOps, Cloud, Containers, Kubernetes, CI/CD, Automation, Infrastructure as Code, and more.\u0026#34; # social icons [[params.socialIcons]] name = \u0026#34;x\u0026#34; url = \u0026#34;https://x.com/slashdevops\u0026#34; [[params.socialIcons]] name = \u0026#34;github\u0026#34; url = \u0026#34;https://github.com/slashdevops\u0026#34; [params.assets] disableHLJS = true disableFingerprinting = false # for search # https://fusejs.io/api/options.html [params.fuseOpts] isCaseSensitive = false shouldSort = true location = 0 distance = 100 threshold = 0.4 minMatchCharLength = 0 limit = 10 # refer: https://www.fusejs.io/api/methods.html#search keys = [ \u0026#34;title\u0026#34;, \u0026#34;permalink\u0026#34;, \u0026#34;summary\u0026#34;, \u0026#34;content\u0026#34;, ] [params.editPost] URL = \u0026#34;https://github.com/slashdevops/slashdevops.github.io/tree/main/content/\u0026#34; Text = \u0026#34;Edit this post on GitHub\u0026#34; appendFilePath = true ############################################################################### # services [services] [services.disqus] shortname = \u0026#34;https-slashdevops-com\u0026#34; ############################################################################### # markup [markup.highlight] anchorLineNos = true codeFences = true guessSyntax = false lineAnchors = \u0026#39;\u0026#39; lineNoStart = 1 lineNos = true lineNumbersInTable = true noClasses = false noHl = false style = \u0026#34;monokai\u0026#34; tabWidth = 4 ############################################################################### # outputs [outputs] home = [ \u0026#34;HTML\u0026#34;, \u0026#34;RSS\u0026#34;, \u0026#34;JSON\u0026#34;, ] 👉 2. My hugo static folder content:\n1 2 3 4 5 6 7 8 9 ├── static │ ├── Ads.txt │ ├── CNAME \u0026lt;- this is very important to configure the domain in GitHub Pages │ ├── README.md │ ├── apple-touch-icon.png │ ├── favicon-16x16.png │ ├── favicon-32x32.png │ ├── favicon.ico │ └── safari-pinned-tab.svg The content of the CNAME file is:\nsource: slashdevops/hugo-slashdevops.github.io -\u0026gt; static/CNAME private repository.\n1 slashdevops.com NOTES:\nEnsure the CNAME file contains the domain you want to use and must be match with the hugo configuration baseURL in the hugo.toml file and with the domain configuration in Google Domains and with the domain configuration in GitHub Pages. This file is located at the private repository hugo-slashdevops.github.io in the static folder and will be deployed to the root of the public repository slashdevops.github.io when the GitHub Actions -\u0026gt; Workflow runs. To understand why this is necessary read CNAME errors 👉 3. Enabling messages and likes with Disqus\nTo have messages and likes in the blog posts I used Disqus. To configure it, you need to create an account in Disqus and then create a new site in the Disqus configuration. 
After that, you will have a shortname that you will use in the hugo.toml configuration file.\nTo see the configuration in the hugo.toml file, see the hugo.toml configuration file above.\nThen I override the themes/PaperMods//layouts/partials/comments.html creating a comments.html inside the layouts/partials folder of my hugo-slashdevops.github.io repository.\nThis is necessary because you need to have a comments.html with the configuration of the hugo -\u0026gt; Disqus I found in the Hugo -\u0026gt; Disqus -\u0026gt; source code the reference is here Hugo - Disqus.\nsource: slashdevops/hugo-slashdevops.github.io -\u0026gt; layouts/partials/comments.html private repository.\n1 2 3 ├── layouts │ └── partials │ └── comments.html \u0026lt;- this is very important to configure the comments in the blog posts The content of the comments.html file is:\nsource: Hugo -\u0026gt; Disqus -\u0026gt; source code\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 {{- $pc := .Site.Config.Privacy.Disqus -}} {{- if not $pc.Disable -}} {{ if .Site.Config.Services.Disqus.Shortname }}\u0026lt;div id=\u0026#34;disqus_thread\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; \u0026lt;script\u0026gt; window.disqus_config = function () { {{with .Params.disqus_identifier }}this.page.identifier = \u0026#39;{{ . }}\u0026#39;;{{end}} {{with .Params.disqus_title }}this.page.title = \u0026#39;{{ . }}\u0026#39;;{{end}} {{with .Params.disqus_url }}this.page.url = \u0026#39;{{ . | html }}\u0026#39;;{{end}} }; (function() { if ([\u0026#34;localhost\u0026#34;, \u0026#34;127.0.0.1\u0026#34;].indexOf(window.location.hostname) != -1) { document.getElementById(\u0026#39;disqus_thread\u0026#39;).innerHTML = \u0026#39;Disqus comments not available by default when the website is previewed locally.\u0026#39;; return; } var d = document, s = d.createElement(\u0026#39;script\u0026#39;); s.async = true; s.src = \u0026#39;//\u0026#39; + {{ .Site.Config.Services.Disqus.Shortname }} + \u0026#39;.disqus.com/embed.js\u0026#39;; s.setAttribute(\u0026#39;data-timestamp\u0026#39;, +new Date()); (d.head || d.body).appendChild(s); })(); \u0026lt;/script\u0026gt; \u0026lt;noscript\u0026gt;Please enable JavaScript to view the \u0026lt;a href=\u0026#34;https://disqus.com/?ref_noscript\u0026#34;\u0026gt;comments powered by Disqus.\u0026lt;/a\u0026gt;\u0026lt;/noscript\u0026gt; \u0026lt;a href=\u0026#34;https://disqus.com\u0026#34; class=\u0026#34;dsq-brlink\u0026#34;\u0026gt;comments powered by \u0026lt;span class=\u0026#34;logo-disqus\u0026#34;\u0026gt;Disqus\u0026lt;/span\u0026gt;\u0026lt;/a\u0026gt;{{end}} {{- end -}} Conclusion I am very happy with the migration from WordPress to Hugo and GitHub Pages. I can say that I am very satisfied with the result and I recommend this migration to anyone who wants to have a fast, responsive, and easy to maintain blog.\n","permalink":"https://slashdevops.com/post/2024/03/31/1/migrated-from-wordpress-to-hugo-and-github-pages/","summary":"Introduction I have been using WordPress for a long time to host slashdevops.com blog site. I had been looking for a way to migrate this blog from WordPress to a static site generator and during the research I found Hugo and I can say I am very happy with it. 
Hugo allows me to write these blog posts in Markdown, and it is very fast and easy to use.
I have also been using GitHub Pages for some of my other personal projects, and I decided to migrate this blog to GitHub Pages as well.","title":"Migrated from WordPress to Hugo and GitHub Pages"},{"content":"The Problem

Surely, like me, you are trying to be more secure when connecting Jenkins to your AWS accounts by assuming a role. If you are asking What is that?, please read this: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
Of course, there are many different options to use, but the problem always surrounds us: if you use a plugin, then maintainability and security (when talking about Jenkins plugins) for sure decrease.
I particularly hate Jenkins; from my point of view it is an obsolete tool trying to survive in the modern world, and if you are concerned about security (and maintainability) you surely understand my point.
So, why am I writing about this?

- Because unfortunately I am still using Jenkins and sweating its maintenance
- Because as a rule of thumb I try to avoid plugins that do not have any release within a window of 6-12 months
- To help others avoid losing time and security when they need the same thing as me: AWS cross-account connections using Jenkins assuming a role
- Because at least if I have an issue, this is my code and I can fix it

What is this?

- A guide and code for somebody using a Jenkins shared library
- A minimal blog entry to help someone who understands Jenkins and Groovy
- Something that could help you if you are using Jenkins + Jenkins shared library + AWS cross-account and cross-region roles

What it is not

- A tutorial
- A very well-explained, step-by-step guide
- Something you surely need to use
- An AWS cross-account tutorial or explanation guide

The Solution

This is how a segment of my production Jenkins declarative pipeline looks.
Look at the withAwsEnVars (lines 4, 11) pipeline tags:

```groovy
...
stage('setup repositories') {
  steps {
    withAwsEnVars(roleName: cicd.getRole(), roleAccount: codeArtifact.getOwner('npm-private')) {
      script {
        log.info('setting repositories \'npm-private\' credentials for dependencies')
        codeArtifact.setupNpmrc('npm-private', '@my-company-namespace', params.timeoutTime*60)
        codeArtifact.setupNpmrc('npm-private', '@my-company-other-namespace', params.timeoutTime*60)
      }
    }
    withAwsEnVars(roleName: cicd.getRole(), roleAccount: codeArtifact.getOwner('npm-public')) {
      script {
        log.info('setting repositories \'npm-public\' credentials for dependencies')
        codeArtifact.setupNpmrc('npm-public', '', params.timeoutTime*60)
      }
    }
  }
}
...
```
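For context (the original post shows this only inside the Groovy helpers), the shared library ultimately shells out to the AWS CLI; the raw call it builds looks roughly like this, where the account ID and role name are placeholder values:

```bash
# Hypothetical values: 123456789012 and cicd-execution-role are placeholders.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/cicd-execution-role \
  --role-session-name jenkins \
  --duration-seconds 900 \
  --query 'Credentials'
```

The JSON object this prints (AccessKeyId, SecretAccessKey, SessionToken) is what the awsCredentials helper below parses with readJSON.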
withAwsEnVars is a Groovy function used in my Jenkins Shared Library and this is the withAwsEnVars.groovy file content:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 #!/usr/bin/env groovy /* paramters: roleArn (required) roleAccount (required) sessionName (optional) sessionDuration (optional) Examples: stage(\u0026#39;test aws credential\u0026#39;) { steps { withAwsEnVars(roleName:\u0026#39;cicd-execution-role\u0026#39;, roleAccount: \u0026#39;12345678910\u0026#39;) { sh \u0026#34;echo TOKEN: ${AWS_SESSION_TOKEN}\u0026#34; sh \u0026#34;echo KEY: ${AWS_SECRET_ACCESS_KEY}\u0026#34; sh \u0026#34;echo ID: ${AWS_ACCESS_KEY_ID}\u0026#34; sh \u0026#39;aws s3 ls\u0026#39; } sh \u0026#34;exit 1\u0026#34; } } */ def call(Map params, Closure body) { if (!params.roleName) { error \u0026#34;\u0026#34;\u0026#34; parameter \u0026#39;roleName\u0026#39; is required. --- Example: withAwsEnVars(roleName:\u0026#39;cicd-execution-role\u0026#39;, roleAccount: \u0026#39;12345678910\u0026#39;) {...} \u0026#34;\u0026#34;\u0026#34; } if (!params.roleAccount) { error \u0026#34;\u0026#34;\u0026#34; parameter \u0026#39;roleAccount\u0026#39; is required. --- Example: withAwsEnVars(roleName:\u0026#39;cicd-execution-role\u0026#39;, roleAccount: \u0026#39;12345678910\u0026#39;) {...} \u0026#34;\u0026#34;\u0026#34; } // get optional parameters if not set default String sessionName = params.get(\u0026#39;sessionName\u0026#39;, \u0026#39;jenkins\u0026#39;) Integer duration = params.get(\u0026#39;sessionDuration\u0026#39;, 900) cred = awsCredentials.getFromAssumeRole(params.roleName, params.roleAccount, sessionName, duration) AWS_ACCESS_KEY_ID = cred.AccessKeyId AWS_SECRET_ACCESS_KEY = cred.SecretAccessKey AWS_SESSION_TOKEN = cred.SessionToken wrap([ $class: \u0026#39;MaskPasswordsBuildWrapper\u0026#39;, varPasswordPairs: [ [password: AWS_ACCESS_KEY_ID], [password: AWS_SECRET_ACCESS_KEY], [password: AWS_SESSION_TOKEN] ] ]) { withEnv([ \u0026#34;AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}\u0026#34;, \u0026#34;AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}\u0026#34;, \u0026#34;AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN}\u0026#34; ]) { body() } } } and this is my awsCredentials.groovy file content:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 #!/usr/bin/env groovy /* Parameters: roleName (required) roleAccount (required) Examples: cred = awsCredentials.getFromAssumeRole(...) 
From the code above, these definitions are located in the Jenkins Shared Library:
- codeArtifact.setupNpmrc(...) -> codeArtifact.groovy
- codeArtifact.getOwner(...) -> codeArtifact.groovy
- cicd.getRole() -> cicd.groovy
The minimal requirements on your Jenkins controller and agents:
- AWS CLI
- Jenkins Plugin – Pipeline Utility Steps -> How to use?
- Jenkins Plugin – Pipeline: Basic Steps -> How to use?
- Jenkins Plugin – Mask Passwords -> How to use?
So, What is the Magic?
Why do I say this is secure? After looking at the following pipeline code, you will see how easy this is to use, and why it is secure: if you execute the following code in your pipeline:

...
stage('test aws credential') {
  steps {
    withAwsEnVars(roleName: 'cicd-execution-role', roleAccount: '12345678910') {
      sh "echo TOKEN: ${AWS_SESSION_TOKEN}"
      sh "echo KEY: ${AWS_SECRET_ACCESS_KEY}"
      sh "echo ID: ${AWS_ACCESS_KEY_ID}"
      sh 'aws s3 ls'
    }
    sh "exit 1"
  }
}
...

you will see the TOKEN, KEY and ID masked. Instead of seeing the real values, you will see *********** characters.
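By the way, a quick way to confirm which identity the wrapped block is really using is to call sts get-caller-identity inside the closure. This is not part of the shared library, just a suggestion assuming the AWS CLI is available on the agent:

# run inside a withAwsEnVars { ... } block, e.g. sh 'aws sts get-caller-identity'
aws sts get-caller-identity

# with the exported variables in place, the Arn should reference the assumed role:
# {
#     "UserId": "AROA...:jenkins",
#     "Account": "12345678910",
#     "Arn": "arn:aws:sts::12345678910:assumed-role/cicd-execution-role/jenkins"
# }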
Tools and Concepts
There are various tools and concepts I used here. The first, which allows me to do this so easily, is Groovy Closures, and how to use them is explained in Jenkins Shared Library -> Defining custom steps.
Then we have the withEnv step provided by the Jenkins Plugin – Pipeline: Basic Steps, which, in combination with Groovy Closures, allows me to export the AWS environment variables coming from the awsCredentials.getFromAssumeRole(...) Groovy function into the wrapped script block.
And making sure that anyone who uses the Jenkins controller and pipelines does not have access to the values of those AWS environment variables is the job of wrap, provided by the Jenkins Plugin – Pipeline: Basic Steps, plus maskPasswords, provided by the Jenkins Plugin – Mask Passwords.
Other Links
https://www.jenkins.io/doc/book/installing/
https://www.jenkins.io/doc/book/using/
https://www.jenkins.io/doc/book/security/#securing-jenkins
https://www.jenkins.io/doc/book/pipeline/
Closing
Even if you hate Jenkins like I do, you can find different ways to make your life easy and secure with it.
If you want to look at my GitHub repositories related to Jenkins, here you have them:
https://github.com/christiangda/jenkins-casc-controller
https://github.com/christiangda/jenkins-shared-library
","permalink":"https://slashdevops.com/post/2022/12/03/1/secure-and-easy-aws-connection-assuming-a-role-with-jenkins-shared-library/","summary":"The Problem Like me, you are surely trying to be more secure when connecting Jenkins to your AWS Accounts by assuming a role. If you are asking What is that?, please read this: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html Of course, there are many different options, but the problem is always around us: if you rely on a plugin, the maintainability and security of your Jenkins installation will most likely decrease. I particularly hate Jenkins; from my point of view it is an obsolete tool trying to survive in the modern world, and if you are concerned about security (and maintainability), you surely understand my point.","title":"Secure and Easy AWS Connection Assuming a Role With Jenkins Shared Library"},{"content":"Operating System
Install Rosetta

/usr/sbin/softwareupdate --install-rosetta --agree-to-license

Install Xcode

xcode-select --install

Package Manager
Install Homebrew

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Add Homebrew to your PATH -> /Users/<user home>/.zprofile

echo 'eval $(/opt/homebrew/bin/brew shellenv)' >> /Users/$USER/.zprofile
eval $(/opt/homebrew/bin/brew shellenv)

(OPTIONAL) Update/upgrade Homebrew

brew update && brew upgrade

Terminal and Mods
Install iterm2

brew install --cask iterm2

WARNING: after this step, close the default terminal and open iterm2.
Install Oh My Zsh

sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

(OPTIONAL) Update Oh My Zsh

omz update

Install Oh My Zsh useful plugins
List the available plugins in your local installation:

ls ~/.oh-my-zsh/plugins

Configure my plugins
NOTE: my list of useful plugins is in the MY_PLUGINS_LIST env var, so check it for yours.

# check it to add or remove yours
export MY_PLUGINS_LIST="git aws golang zsh-navigation-tools brew docker docker-compose minikube kubectl ansible virtualenv python rust terraform vscode podman"
sed -i"bkup" "s/plugins\=(git)/plugins\=($MY_PLUGINS_LIST)/" ~/.zshrc
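To double-check that the sed rewrote the plugins line as expected (a small verification step of my own, using the same file and backup suffix as above), you can print the line back and confirm the backup exists:

# show the active plugins line in ~/.zshrc
grep "^plugins=" ~/.zshrc

# the sed -i"bkup" invocation left a backup of the previous file
ls ~/.zshrcbkup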
Install and configure Custom Plugins

# zsh-syntax-highlighting
git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
sed -i"bkup" '/plugins\=/ s/)$/ zsh-syntax-highlighting)/' ~/.zshrc

# zsh-autosuggestions
git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
sed -i"bkup" '/plugins\=/ s/)$/ zsh-autosuggestions)/' ~/.zshrc

# zsh-completions
git clone https://github.com/zsh-users/zsh-completions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-completions
sed -i"bkup" '/plugins\=/ s/)$/ zsh-completions)/' ~/.zshrc

(OPTIONAL) Configure and install the Powerlevel10k Theme for Zsh
Install Powerlevel10k on Oh My Zsh

git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k

(OPTIONAL) Update Powerlevel10k

git -C ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k pull

Configure Oh My Zsh to use Powerlevel10k

sed -i"bkup" 's/ZSH_THEME\="robbyrussell"/ZSH_THEME\="powerlevel10k\/powerlevel10k"/' ~/.zshrc

(OPTIONAL) Configure Powerlevel10k to show only the last directory

typeset -g POWERLEVEL9K_SHORTEN_STRATEGY=truncate_to_last

WARNING:
- After this step, close the iterm2 console.
- Once iterm2 opens again, the customization process starts; then close iterm2 once more to begin the Powerlevel10k configuration process.
Configure Zsh history
Append the history configuration to the ~/.zshrc file:

cat >> ~/.zshrc <<_EOL_
# History
HISTFILE="\$HOME/.zsh_history"
HISTSIZE=500000
SAVEHIST=500000
setopt BANG_HIST                 # Treat the '!' character specially during expansion.
setopt EXTENDED_HISTORY          # Write the history file in the ":start:elapsed;command" format.
setopt INC_APPEND_HISTORY        # Write to the history file immediately, not when the shell exits.
setopt SHARE_HISTORY             # Share history between all sessions.
setopt HIST_EXPIRE_DUPS_FIRST    # Expire duplicate entries first when trimming history.
setopt HIST_IGNORE_DUPS          # Don't record an entry that was just recorded again.
setopt HIST_IGNORE_ALL_DUPS      # Delete old recorded entry if new entry is a duplicate.
setopt HIST_FIND_NO_DUPS         # Do not display a line previously found.
setopt HIST_IGNORE_SPACE         # Don't record an entry starting with a space.
setopt HIST_SAVE_NO_DUPS         # Don't write duplicate entries in the history file.
setopt HIST_REDUCE_BLANKS        # Remove superfluous blanks before recording entry.
setopt HIST_VERIFY               # Don't execute immediately upon history expansion.
setopt HIST_BEEP                 # Beep when accessing nonexistent history.
# End of History
_EOL_

Apply the changes:

source ~/.zshrc
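If you want to verify that the new history options were picked up by the running shell (again, just a verification step, not part of the original setup), zsh can list the options that are currently set:

# list the history-related options enabled in the current zsh session
setopt | grep -i hist

# and confirm the configuration block was appended to ~/.zshrc
grep -E "HISTSIZE|SAVEHIST" ~/.zshrc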
IDE and Plugins
Install Visual Studio Code
Using brew cask:

brew install --cask visual-studio-code

Add vscode to the PATH
Useful to call vscode from anywhere:

cat << EOF >> ~/.zprofile
# Add Visual Studio Code (code)
export PATH="\$PATH:/Applications/Visual Studio Code.app/Contents/Resources/app/bin"
EOF

Reload the terminal

source ~/.zprofile

Install my default extensions

code --install-extension eamodio.gitlens
code --install-extension streetsidesoftware.code-spell-checker
code --install-extension yzhang.markdown-all-in-one
code --install-extension redhat.vscode-yaml

Container Management
Install Podman
Like docker, but better, because it is free and open source.
Using brew:

brew install podman

Install podman-desktop

brew install podman-desktop

Initialize the podman machine
This is the virtual machine we have on macOS so that podman can work like docker does.

podman machine init

NOTE: a good reference is here: https://docs.podman.io/en/latest/markdown/podman-machine-init.1.html
Start the podman machine

podman machine start

NOTE:
- a good reference is here: https://docs.podman.io/en/latest/markdown/podman-machine-start.1.html
- podman can act as a wrapper for docker, so if you don't have docker installed, just create a terminal alias in your OS.
(OPTIONAL) Create a docker alias to podman
This is in case you don't have docker installed and you want to use podman as a wrapper for it:

cat << EOF >> ~/.zshrc
# docker alias to podman
# remove it if you want to install docker
alias docker=podman
EOF
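Finally, a quick smoke test to check that the podman machine works end to end. This is my suggestion rather than a required step, and hello-world is just a well-known public image:

# show the status of the machine and its connection
podman info

# run a throwaway container; it prints a greeting and exits
podman run --rm docker.io/library/hello-world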
","permalink":"https://slashdevops.com/post/2022/09/17/1/my-custom-macbook-ro-m1-m2-provisioning/","summary":"Operating System Install Rosetta /usr/sbin/softwareupdate --install-rosetta --agree-to-license Install Xcode xcode-select --install Package Manager Install Homebrew /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" Add Homebrew to your PATH -> /Users/<user home>/.zprofile echo 'eval $(/opt/homebrew/bin/brew shellenv)' >> /Users/$USER/.zprofile eval $(/opt/homebrew/bin/brew shellenv) (OPTIONAL) Update/upgrade Homebrew brew update && brew upgrade Terminal and Mods Install iterm2 brew install --cask iterm2 WARNING: after this step, close the default terminal and open iterm2","title":"My custom MacBook Pro [m1|m2] Provisioning"},{"content":"Managing the Lifecycle of your Elasticsearch Indices
Just like me, you are probably storing your [Applications | Infrastructure | IoT] Logs / Traces (as a time series) into Elasticsearch, or at least considering doing it.
If that is the case, you might be wondering how to efficiently manage index lifecycles in an automated and clean manner; then this post is for you!
What's happening?
Basically, this means that your log management/aggregator applications are storing the logs in Elasticsearch using the timestamp (of capture, processing, or another one) for every record of data, and grouping them using a pattern for every group.
In Elasticsearch terms, this group of logs is called an index, and the pattern commonly refers to the suffix used when you create the index name, e.g.: sample-logs-2020-04-25.
The problem
Up to here everything is OK, right? The problems begin when your data starts accumulating and you don't want to spend too much time/money to store/maintain/delete it.
Additionally, you may be managing all the indices the same way, regardless of data retention requirements or access patterns: all the indices have the same number of replicas, shards, disk type, etc. In my case, the first week of indices is more important than the indices that are three months old.
As I mentioned before, depending on your index name and configuration, you will end up with different indices aggregating logs based on different timeframes:

...
sample-logs-2020-04-22
sample-logs-2020-04-23
sample-logs-2020-04-24
...
sample-logs-2020-04-27

It is likely that, just like I was doing a few months back, you are using a custom script/application implementing the Elasticsearch Curator API, or going directly to the Elasticsearch index API, to delete or maintain your indices' lifecycle; or worse, you are storing your indices forever without any kind of control, or deleting them manually.
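To make that concrete, this is roughly the kind of ad-hoc cleanup job that ISM replaces, sketched as a small shell script. Everything in it is hypothetical: the endpoint, the index prefix, and the 15-day retention are placeholders, and the GNU date syntax would need adapting on macOS/BSD:

#!/usr/bin/env bash
# delete the index that is exactly 15 days old (hypothetical endpoint and pattern)
ES_ENDPOINT="http://localhost:9200"
OLD_INDEX="sample-logs-$(date -d '15 days ago' +%Y-%m-%d)"

curl -s -X DELETE "${ES_ENDPOINT}/${OLD_INDEX}"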
My logs cases
Case 1
Third-party applications that use their own index name pattern, like indexname-yyyy-mm-dd, and I cannot / don't want to change it; e.g. Zipkin (zipkin-2020-04-25).
Case 2
My own log aggregator (a custom AWS Lambda function) and/or third-party applications like fluentd, LogStash, etc. that allow me to change the index name pattern. So, in this case, I can decide how to aggregate my logs and the index pattern name I want to use.
One more thing before moving on: the term Elasticsearch uses for creating an index per day, hour, month, etc. is rollover.
The Solution
Preliminaries
In my case, I'm using the AWS Elasticsearch Service, which is a "little bit different" from Elastic's Elasticsearch, since AWS decided to create their own Elasticsearch fork called Open Distro for Elasticsearch.
The key terms to understand the "Index Lifecycle" in each Elasticsearch distribution are:
- Index State Management (ISM) → Open Distro for Elasticsearch
- Index Lifecycle Management (ILM) → Elastic's Elasticsearch
Elasticsearch concepts are out of the scope of this post; in the cases below I will explain how Open Distro for Elasticsearch manages its indices' lifecycle.
Case 1
... Remember the description above.
The log management/aggregation application does the "rollover" of my indices, but I would like to delete/change those after the index has rolled (the most common case).
By creating an Index State Management Policy to delete indices based on time and/or size, and using Elasticsearch Templates and Elasticsearch Aliases, your Elasticsearch engine can delete your indices periodically.
And yes, the result is very similar to what I was doing with my custom AWS Lambda function in Python using the Elasticsearch Curator API.
But without the hassle of writing any code, handling connection errors, upgrading my code every time my Elasticsearch was upgraded, managing credentials, changing the env vars to pass the new indices' names, etc.
Now, thanks to ISM, I can use a JSON declarative language to define some rules (policies), and the Elasticsearch engine is in charge of the rest.
Policies? ... Imagine you can implement these kinds of rules:
- Keep "my fresh indices" open for writes for 2 days, then
- after the first 2 days, close those indices for write operations and keep them for 13 more days, then
- 15 days after index creation, delete the index forever; end.
What does this mean?
Well, after learning about ISM Policies and using the Kibana Dev Tools, I created a policy named delete_after_15d following the rules described above, and here you have it:

# ISM Policy delete_after_15d
PUT _opendistro/_ism/policies/delete_after_15d
{
  "policy": {
    "policy_id": "delete_after_15d",
    "description": "Maintains the indices open by 2 days, then closes those and delete indices after 15 days",
    "default_state": "ReadWrite",
    "schema_version": 1,
    "states": [
      {
        "name": "ReadWrite",
        "actions": [
          { "read_write": {} }
        ],
        "transitions": [
          {
            "state_name": "ReadOnly",
            "conditions": {
              "min_index_age": "2d"
            }
          }
        ]
      },
      {
        "name": "ReadOnly",
        "actions": [
          { "read_only": {} }
        ],
        "transitions": [
          {
            "state_name": "Delete",
            "conditions": {
              "min_index_age": "13d"
            }
          }
        ]
      },
      {
        "name": "Delete",
        "actions": [
          { "delete": {} }
        ]
      }
    ]
  }
}

NOTE: notice the states and their transitions; these implement the rules described above.
Then, using the following Elasticsearch Template, I applied the policy (via the policy_id setting) to all my indices matching the index_patterns:

# Template sample-logs to apply the ISM Policy delete_after_15d to new indices
PUT _template/sample-logs
{
  "index_patterns": [
    "sample-logs-*"
  ],
  "settings": {
    "index.opendistro.index_state_management.policy_id": "delete_after_15d"
  }
}

Now what? Is it ready?
For the new indices, yes: the indices created after you created this template in your Elasticsearch.
What about the old ones? The indices created before applying the index template: for these, we need to change their settings and add the policy_id:

# Change the oldest indices' settings to apply the ISM Policy delete_after_15d
PUT sample-logs-2020-*/_settings
{
  "settings": {
    "index.opendistro.index_state_management.policy_id": "delete_after_15d"
  }
}
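After attaching the policy, either via the template or via the _settings change, you may want to confirm which policy is managing each index. Open Distro exposes an explain endpoint for that; here it is as a curl call, with the endpoint as a placeholder (in Kibana Dev Tools, the equivalent is GET _opendistro/_ism/explain/sample-logs-*):

# ask ISM which policy manages the matching indices (endpoint is a placeholder)
curl -s "http://localhost:9200/_opendistro/_ism/explain/sample-logs-*?pretty"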
But, how do I complete the tasks mentioned before? Don't worry, keep calm! Here https://github.com/slashdevops/es-lifecycle-ism you have the complete explanation of how to apply this rule in your own Elasticsearch, and also how to test it in an Elasticsearch instance or create one locally with docker-compose.
Case 2
My Elasticsearch rolls over the indices based on time and/or size, and I want to have only one entry point (index) to send my logs (I think this is the best one).
The rules again:
- Roll over "my fresh indices" after 1 day, then
- close those indices for write operations and keep them for 13 more days, then
- 15 days after the index was created, delete it forever; end.
Well, to do that, I created an ISM policy named rollover_1d_delete_after_15 to control the state of my indices, using the rollover action:

# ISM Policy rollover_1d_delete_after_15
PUT _opendistro/_ism/policies/rollover_1d_delete_after_15
{
  "policy": {
    "policy_id": "rollover_1d_delete_after_15",
    "description": "Rollover every 1d, then closes those and delete indices after 15 days",
    "default_state": "Rollover",
    "schema_version": 1,
    "states": [
      {
        "name": "Rollover",
        "actions": [
          {
            "rollover": {
              "min_index_age": "1d"
            }
          }
        ],
        "transitions": [
          {
            "state_name": "ReadOnly",
            "conditions": {
              "min_index_age": "2d"
            }
          }
        ]
      },
      {
        "name": "ReadOnly",
        "actions": [
          { "read_only": {} }
        ],
        "transitions": [
          {
            "state_name": "Delete",
            "conditions": {
              "min_index_age": "13d"
            }
          }
        ]
      },
      {
        "name": "Delete",
        "actions": [
          { "delete": {} }
        ],
        "transitions": []
      }
    ]
  }
}

NOTE: notice the rollover action in the Rollover state.
Then, as in Case 1, using the following Elasticsearch Template, I applied the policy to my indices matching the index_patterns; note the policy_id and the new rollover_alias settings:

# Template sample-logs-rollover to apply the ISM Policy
# rollover_1d_delete_after_15 to new indices
PUT _template/sample-logs-rollover
{
  "index_patterns": [
    "sample-logs-rollover-*"
  ],
  "settings": {
    "index.opendistro.index_state_management.policy_id": "rollover_1d_delete_after_15",
    "index.opendistro.index_state_management.rollover_alias": "sample-logs-rollover"
  }
}

What does it mean?
Now the Elasticsearch engine will be in charge of rolling over the indices, and you don't need to create any index name pattern when indexing your data into Elasticsearch; in other words, your applications' log aggregator doesn't need to roll over your indices.
The last, and obligatory, step to trigger the whole rollover process inside Elasticsearch is to create the first rollover index according to the template, with the alias defined inside it:

# Create the first rollover index manually (it is necessary)
# to trigger the ISM Policy association
PUT sample-logs-rollover-000001
{
  "aliases": {
    "sample-logs-rollover": {
      "is_write_index": true
    }
  }
}

So, how do I index my data now?
Using the rollover alias (the rollover_alias setting in the template above) created in the Elasticsearch template. Now you have only one index name (the index alias) to configure in your custom program / LogStash / Fluentd, etc., and you can forget the suffix pattern.
Here is an example of how to insert data using the rollover index alias:

# Bulk load sample. NOTE: to insert data, use the rollover alias
POST _bulk
{"index": { "_index": "sample-logs-rollover"}}
{"message": "This is a log sample 1", "@timestamp": "2020-04-26T11:07:00+0000"}
{"index": { "_index": "sample-logs-rollover"}}
{"message": "This is a log sample 2", "@timestamp": "2020-04-26T11:08:00+0000"}
Conclusions
If you have Elasticsearch as your log storage and indexing platform and you have never used or heard about:
- Index State Management (ISM) → Open Distro for Elasticsearch
- Index Lifecycle Management (ILM) → Elastic's Elasticsearch
then go fast and learn how to apply them to improve your everyday job.
Acknowledgements
This was possible thanks to my friend Alejandro Sabater, who took his free time to review this post and share his recommendations with me.
","permalink":"https://slashdevops.com/post/2020/05/13/1/managing-the-lifecycle-of-your-elasticsearch-indices/","summary":"Managing the Lifecycle of your Elasticsearch Indices Just like me, you are probably storing your [Applications | Infrastructure | IoT] Logs / Traces (as a time series) into Elasticsearch, or at least considering doing it. If that is the case, you might be wondering how to efficiently manage index lifecycles in an automated and clean manner; then this post is for you! What's happening? Basically, this means that your log management/aggregator applications are storing the logs in Elasticsearch using the timestamp (of capture, processing, or another one) for every record of data, and grouping them using a pattern for every group.","title":"Managing the Lifecycle of your Elasticsearch Indices"}]