Distributed File System written in C

feat: First commit.

stau.space 6ac65be9

+1466
+1
.envrc
··· 1 + use flake
+2
.gitignore
··· 1 + .direnv 2 + .build
+16
LICENSE
··· 1 + MIT No Attribution 2 + 3 + Copyright <year> Sona Tau Estrada Rivera <sona@stau.space> 4 + 5 + Permission is hereby granted, free of charge, to any person obtaining a copy of this 6 + software and associated documentation files (the "Software"), to deal in the Software 7 + without restriction, including without limitation the rights to use, copy, modify, 8 + merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 9 + permit persons to whom the Software is furnished to do so. 10 + 11 + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 12 + INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 13 + PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 14 + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 15 + OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 16 + SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+333
README.md
··· 1 + # Project 2 + 3 + To run this project use: 4 + 5 + ```sh 6 + make run 7 + ``` 8 + 9 + To compile this project use: 10 + 11 + ```sh 12 + make build 13 + ``` 14 + 15 + # TODO 16 + - [ ] createdb.py 17 + - [ ] testdb.py 18 + - [ ] ls.py 19 + - [ ] meta-data.py 20 + - [ ] implement `reg` to receive information from the data nodes 21 + 22 + 23 + # Assignment 04: Distributed File Systems 24 + 25 + The components to implement are: 26 + 27 + * **Metadata server**, which will function as an inode repository 28 + * **Data servers**, that will serve as the disk space for file data blocks 29 + * **List client**, that will list the files available in the DFS 30 + * **Copy client**, that will copy files from and to the DFS 31 + 32 + # Objectives 33 + 34 + * Study the main components of a distributed file system 35 + * Get familiar with file management 36 + * Implementation of a distributed system 37 + 38 + # Prerequisites 39 + 40 + * Python: 41 + * [www.python.org](http://www.python.org/) 42 + * Python SocketServer library: for **TCP** socket communication. 43 + * https://docs.python.org/3/library/socketserver.html 44 + * uuid: to generate unique IDs for the data blocks 45 + * https://docs.python.org/3/library/uuid.html 46 + * **Optionally** you may read about the json and sqlite3 libraries used in the 47 + skeleton of the program. 48 + * https://docs.python.org/3/library/json.html 49 + * https://docs.python.org/3/library/sqlite3.html 50 + 51 + ### **The metadata server's database manipulation functions.** 52 + 53 + No expertise in database management is required to accomplish this project. 54 + However, sqlite3 is used to store the file inodes in the metadata server. You 55 + don't need to understand the functions, but you do need to read the documentation 56 + of the functions that interact with the database. The metadata server database 57 + functions are defined in file mds\_db.py. 
58 + 59 + #### **Inode** 60 + 61 + For this implementation, an **inode** consists of: 62 + 63 + * File name 64 + * File size 65 + * List of blocks 66 + 67 + #### **Block List** 68 + 69 + The **block list** consists of a list of: 70 + 71 + * data node address \- the data node where the block is stored 72 + * data node port \- the service port of the data node 73 + * data node block\_id \- the id assigned to the block 74 + 75 + Functions: 76 + 77 + * AddDataNode(address, port): Adds a new data node to the metadata server. 78 + Receives the IP address and port, i.e., the information needed to connect to the data node. 79 + 80 + * GetDataNodes(): Returns a list of registered data node tuples **(address, port)**. 81 + Useful to know to which data nodes the data blocks can be sent. 82 + * InsertFile(filename, fsize): Inserts a filename with its file size into the 83 + database. 84 + * GetFiles(): Returns a list of the attributes of the files stored in the DFS. 85 + (addr, file size) 86 + * AddBlockToInode(filename, blocks): Adds the list of data block information for 87 + a file. The data block information consists of (address, port, block\_id) 88 + * GetFileInode(filename): Returns the file size and the list of data block 89 + information for a file. (fsize, block\_list) 90 + 91 + ### **The packet manipulation functions:** 92 + 93 + The packet library serializes the communication data using the 94 + json library. No expertise with json is required to accomplish this assignment. 95 + These functions were developed to ease the packet generation process of the 96 + project. The packet library is defined in file Packet.py. 97 + 98 + In this project all packet objects have a packet type among the following 99 + command type options: 100 + 101 + * reg: to register a data node 102 + * list: to ask for a list of files 103 + * put: to put files in the DFS 104 + * get: to get files from the DFS 105 + * dblks: to add the data block ids to the files. 
106 + 107 + #### **Functions:** 108 + 109 + ##### **General Functions** 110 + 111 + * getEncodedPacket(): Returns a serialized packet ready to send over the 112 + network. You first need to build the packet; see the Build**\<X\>**Packet 113 + functions. 114 + * DecodePacket(packet): Receives a serialized message and turns it into a 115 + packet object. 116 + * getCommand(): Returns the command type of the packet 117 + 118 + ##### **Packet Registration Functions** 119 + 120 + * BuildRegPacket(addr, port): Builds a registration packet. 121 + * getAddr(): Returns the IP address of a server. Useful for registration 122 + packets 123 + * getPort(): Returns the port number of a server. Useful for registration 124 + packets 125 + 126 + ##### **Packet List Functions** 127 + 128 + * BuildListPacket(): Builds a list packet for file listing 129 + * BuildListResponse(filelist): Builds a list response packet with the list of 130 + files. 131 + * getFileArray(): Returns a list of files 132 + 133 + ##### **Get Packet Functions** 134 + 135 + * BuildGetPacket(fname): Builds a get packet to request a file by name. 136 + * BuildGetResponse(metalist, fsize): Builds a list of data node servers with 137 + the blocks of a file, and the file size. 138 + * getFileName(): Returns the file name in a packet. 139 + * getDataNodes(): Returns a list of data servers. 140 + 141 + ##### **Put Packet Functions (Put Blocks)** 142 + 143 + * BuildPutPacket(fname, size): Builds a put packet to put fname and file size 144 + in the metadata server. 145 + * getFileInfo(): Returns the file info in a packet. 146 + * BuildPutResponse(metalist): Builds a list of data node servers where the data 147 + blocks of a file can be stored, i.e., a list of available data servers. 148 + * BuildDataBlockPacket(fname, block\_list): Builds a data block packet. 149 + Contains the file name and the list of blocks for the file. 
See [block 150 + list](http://ccom.uprrp.edu/~jortiz/clases/ccom4017/asig04/#block_list) to 151 + review the content of a block list. 152 + * getDataBlocks(): Returns a list of data blocks 153 + 154 + ##### **Get Data block Functions (Get Blocks)** 155 + 156 + * BuildGetDataBlockPacket(blockid): Builds a get data block packet. Useful 157 + when requesting a data block from a data node. 158 + * getBlockID(): Returns the block\_id from a packet. 159 + 160 + # Instructions 161 + 162 + Write and complete code for an unreliable and insecure distributed file server 163 + following the specifications below. 164 + 165 + ### **Design specifications.** 166 + 167 + For this project you will design and complete a distributed file system. You 168 + will write a DFS with tools to list the files, and to copy files from and to 169 + the DFS. 170 + 171 + Your DFS will consist of: 172 + 173 + * A metadata server: which will contain the metadata (inode) information of the 174 + files in your file system. It will also keep a registry of the data servers 175 + that are connected to the DFS. 176 + * Data nodes: The data nodes will contain chunks (some blocks) of the file that 177 + you are storing in the DFS. 178 + * List command: A command to list the files stored in the DFS. 179 + * Copy command: A command that will copy files from and to the DFS. 180 + 181 + ### **The metadata server** 182 + 183 + The metadata server contains the metadata (inode) information of the files in 184 + your file system. It will also keep a registry of the data servers that are 185 + connected to the DFS. 186 + 187 + Your metadata server must provide the following services: 188 + 189 + 1. Listen to the data nodes that are part of the DFS. Every time a new data 190 + node registers with the DFS, the metadata server must keep the contact information 191 + of that data node. This is (IP Address, Listening Port). 
192 + * To ease the implementation of the DFS, the directory file system must 193 + contain three things: 194 + * the path of the file in the file system (filename) 195 + * the nodes that contain the data blocks of the files 196 + * the file size 197 + 2. Every time a client (commands list or copy) contacts the metadata server 198 + for: 199 + * get: requesting to read a file: the metadata server must check if the file 200 + is in the DFS database, and if it is, it must return the nodes with the 201 + block\_ids that contain the file. 202 + * put: requesting to write a file: the metadata server must: 203 + * insert in the database the path of the new file (with its name), and its 204 + size. 205 + * return a list of available data nodes where to write the chunks of the 206 + file 207 + * dblks: then store the data blocks that have the information of the data 208 + nodes and the block ids of the file. 209 + * list: requesting to list files: 210 + * the metadata server must return a list with the files in the DFS and 211 + their size. 212 + 213 + The metadata server must be run: 214 + 215 + python meta-data.py \<port, default=8000\> 216 + 217 + If no port is specified, port 8000 will be used by default. 218 + 219 + ### **The data node server** 220 + 221 + The data node is the process that receives and saves the data blocks of the 222 + files. It must register with the metadata server as soon as it starts its 223 + execution. The data node receives the data from the clients when the client 224 + wants to write a file, and returns the data when the client wants to read a 225 + file. 226 + 227 + Your data node must provide the following services: 228 + 229 + 1. put: Listen to writes: 230 + * The data node will receive blocks of data, store them using a unique id, 231 + and return the unique id. 232 + * Each node must have its own block storage path. You may run more than one 233 + data node per system. 234 + 2. 
get: Listen to reads 235 + * The data node will receive requests for data blocks, and it must read the 236 + data block, and return its content. 237 + 238 + The data nodes must be run: 239 + 240 + python data-node.py \<server address\> \<port\> \<data path\> \<metadata 241 + port,default=8000\> 242 + 243 + Server address is the metadata server address, port is the data-node port 244 + number, data path is a path to a directory to store the data blocks, and 245 + metadata port is the optional metadata server port, needed if the server was run 246 + on a port other than the default. 247 + 248 + **Note:** Since you most probably do not have many different computers at your 249 + disposal, you may run more than one data-node on the same computer, but each 250 + must use a different listening port and data block directory. 251 + 252 + ### **The list client** 253 + 254 + The list client just sends a list request to the metadata server and then waits 255 + for a list of file names with their sizes. 256 + 257 + The output must look like: 258 + 259 + /home/cheo/asig.cpp 30 bytes 260 + /home/hola.txt 200 bytes 261 + /home/saludos.dat 2000 bytes 262 + 263 + The list client must be run: 264 + 265 + python ls.py \<server\>:\<port, default=8000\> 266 + 267 + Where server is the metadata server IP and port is the metadata server port. If 268 + no port is indicated, the default port 8000 is used and no ':' character 269 + is necessary. 270 + 271 + ### **The copy client** 272 + 273 + The copy client is more complicated than the list client. It is in charge of 274 + copying files from and to the DFS. 275 + 276 + The copy client must: 277 + 278 + 1. Write files in the DFS 279 + * The client must send to the metadata server the file name and size of the 280 + file to write. 281 + * Wait for the metadata server response with the list of available data 282 + nodes. 283 + * Send the data blocks to each data node. 
284 + * You may decide to divide the file over the number of data servers. 285 + * You may divide the file into X-size blocks and send them to the data 286 + servers in round-robin order. 287 + 2. Read files from the DFS 288 + * Contact the metadata server with the file name to read. 289 + * Wait for the block list with the block ids and data server information. 290 + * Retrieve the file blocks from the data servers. 291 + * This part will depend on the division algorithm used in step (1). 292 + 293 + The copy client must be run: 294 + 295 + Copy from DFS: 296 + 297 + python copy.py \<server\>:\<port\>:\<dfs file path\> \<destination file\> 298 + 299 + To DFS: 300 + 301 + python copy.py \<source file\> \<server\>:\<port\>:\<dfs file path\> 302 + 303 + Where server is the metadata server IP address, and port is the metadata server 304 + port. 305 + 306 + # Creating an empty database 307 + 308 + The script createdb.py generates an empty database *dfs.db* for the project. 309 + 310 + python createdb.py 311 + 312 + # Deliverables 313 + 314 + * The source code of the programs (well documented) 315 + * A README file with: 316 + * a description of the programs, including a brief description of how they 317 + work. 318 + * who helped you or discussed issues with you to finish the program. 319 + * Video description of the project with implementation details. For any doubt, 320 + please consult the professor. 321 + 322 + # Rubric 323 + 324 + * (10 pts) the programs run 325 + * (80 pts) quality of the working solutions 326 + * (20 pts) Metadata server implemented correctly 327 + * (25 pts) Data server implemented correctly 328 + * (10 pts) List client implemented correctly 329 + * (25 pts) Copy client implemented correctly 330 + * (10 pts) quality of the README 331 + * (10 pts) description of each program and how it works. 332 + * No project will be graded without submission of the video explaining how the 333 + project was implemented.
+41
build.sh
··· 1 + #!/usr/bin/env sh 2 + set -e 3 + 4 + SRC=src 5 + BUILD=.build 6 + CC=gcc 7 + 8 + run() { 9 + FILE=${SRC}/$1 10 + URI=$2 11 + 12 + tcc -run ${FILE} ${URI} 13 + } 14 + 15 + build() { 16 + FILE=$1 17 + URI=$2 18 + 19 + OUT=${BUILD}/${FILE} 20 + IN=${SRC}/${FILE} 21 + 22 + mkdir -p ${BUILD} 23 + ${CC} -std=gnu99 -O3 -Wno-builtin-declaration-mismatch ${SRC}/lib/*.c ${IN} -o ${OUT} 24 + ./${OUT} ${URI} 25 + } 26 + 27 + case $1 in 28 + copy.py) 29 + build copy.c $2 30 + ;; 31 + copy) 32 + run copy.c $2 33 + ;; 34 + cluster.py) 35 + build cluster.c ${ADDRESS} ${PORT} 36 + ;; 37 + *) 38 + echo "You must provide a file." 39 + exit 1 40 + ;; 41 + esac
+25
flake.lock
··· 1 + { 2 + "nodes": { 3 + "nixpkgs": { 4 + "locked": { 5 + "lastModified": 1762111121, 6 + "narHash": "sha256-4vhDuZ7OZaZmKKrnDpxLZZpGIJvAeMtK6FKLJYUtAdw=", 7 + "rev": "b3d51a0365f6695e7dd5cdf3e180604530ed33b4", 8 + "revCount": 888552, 9 + "type": "tarball", 10 + "url": "https://api.flakehub.com/f/pinned/NixOS/nixpkgs/0.1.888552%2Brev-b3d51a0365f6695e7dd5cdf3e180604530ed33b4/019a4ac5-41ea-7209-b0c4-883187b7dcdd/source.tar.gz" 11 + }, 12 + "original": { 13 + "type": "tarball", 14 + "url": "https://flakehub.com/f/NixOS/nixpkgs/0.1" 15 + } 16 + }, 17 + "root": { 18 + "inputs": { 19 + "nixpkgs": "nixpkgs" 20 + } 21 + } 22 + }, 23 + "root": "root", 24 + "version": 7 25 + }
+45
flake.nix
··· 1 + { 2 + description = "Declarations for the environment that this project will use."; 3 + 4 + # Flake inputs 5 + inputs.nixpkgs.url = "https://flakehub.com/f/NixOS/nixpkgs/0.1"; 6 + 7 + # Flake outputs 8 + outputs = inputs: 9 + let 10 + # The systems supported for this flake 11 + supportedSystems = [ 12 + "x86_64-linux" # 64-bit Intel/AMD Linux 13 + "aarch64-linux" # 64-bit ARM Linux 14 + "x86_64-darwin" # 64-bit Intel macOS 15 + "aarch64-darwin" # 64-bit ARM macOS 16 + ]; 17 + 18 + # Helper to provide system-specific attributes 19 + forEachSupportedSystem = f: inputs.nixpkgs.lib.genAttrs supportedSystems (system: f { 20 + pkgs = import inputs.nixpkgs { inherit system; }; 21 + }); 22 + in 23 + { 24 + devShells = forEachSupportedSystem ({ pkgs }: { 25 + default = pkgs.mkShell { 26 + # The Nix packages provided in the environment 27 + # Add any you need here 28 + packages = with pkgs; [ 29 + tinycc 30 + gcc 31 + gnumake 32 + clang-tools 33 + lldb 34 + ]; 35 + 36 + # Set any environment variables for your dev shell 37 + env = { }; 38 + 39 + # Add any shell logic you want executed any time the environment is activated 40 + shellHook = '' 41 + ''; 42 + }; 43 + }); 44 + }; 45 + }
+333
instructions.md
··· 1 + # Assignment 04: Distributed File Systems 2 + 3 + University of Puerto Rico at Rio Piedras 4 + 5 + Department of Computer Science 6 + 7 + CCOM4017: Operating Systems 8 + 9 + # Introduction 10 + 11 + In this project the student will implement the main components of a file system 12 + by implementing a simple, yet functional, distributed file system (DFS). The 13 + project will expand students' knowledge of the main components of a file system 14 + (inodes and data blocks), will further develop the students' skills in 15 + inter-process communication, and will increase their system security awareness. 16 + 17 + The components to implement are: 18 + 19 + * **Metadata server**, which will function as an inode repository 20 + * **Data servers**, that will serve as the disk space for file data blocks 21 + * **List client**, that will list the files available in the DFS 22 + * **Copy client**, that will copy files from and to the DFS 23 + 24 + # Objectives 25 + 26 + * Study the main components of a distributed file system 27 + * Get familiar with file management 28 + * Implementation of a distributed system 29 + 30 + # Prerequisites 31 + 32 + * Python: 33 + * [www.python.org](http://www.python.org/) 34 + * Python SocketServer library: for **TCP** socket communication. 35 + * 36 + [https://docs.python.org/3/library/socketserver.html](https://docs.python.org/3/library/socketserver.html) 37 + 38 + * uuid: to generate unique IDs for the data blocks 39 + * 40 + [https://docs.python.org/3/library/uuid.html](https://docs.python.org/3/library/uuid.html) 41 + 42 + * **Optionally** you may read about the json and sqlite3 libraries used in the 43 + skeleton of the program. 
44 + * 45 + [https://docs.python.org/3/library/json.html](https://docs.python.org/3/library/json.html) 46 + 47 + * 48 + [https://docs.python.org/3/library/sqlite3.html](https://docs.python.org/3/library/sqlite3.html) 49 + 50 + 51 + ### **The metadata server's database manipulation functions.** 52 + 53 + No expertise in database management is required to accomplish this project. 54 + However, sqlite3 is used to store the file inodes in the metadata server. You 55 + don't need to understand the functions, but you do need to read the documentation 56 + of the functions that interact with the database. The metadata server database 57 + functions are defined in file mds\_db.py. 58 + 59 + #### **Inode** 60 + 61 + For this implementation, an **inode** consists of: 62 + 63 + * File name 64 + * File size 65 + * List of blocks 66 + 67 + #### **Block List** 68 + 69 + The **block list** consists of a list of: 70 + 71 + * data node address \- the data node where the block is stored 72 + * data node port \- the service port of the data node 73 + * data node block\_id \- the id assigned to the block 74 + 75 + Functions: 76 + 77 + * AddDataNode(address, port): Adds a new data node to the metadata server. 78 + Receives the IP address and port, i.e., the information needed to connect to the data node. 79 + 80 + * GetDataNodes(): Returns a list of registered data node tuples **(address, port)**. 81 + Useful to know to which data nodes the data blocks can be sent. 82 + * InsertFile(filename, fsize): Inserts a filename with its file size into the 83 + database. 84 + * GetFiles(): Returns a list of the attributes of the files stored in the DFS. 85 + (addr, file size) 86 + * AddBlockToInode(filename, blocks): Adds the list of data block information for 87 + a file. The data block information consists of (address, port, block\_id) 88 + * GetFileInode(filename): Returns the file size and the list of data block 89 + information for a file. 
(fsize, block\_list) 90 + 91 + ### **The packet manipulation functions:** 92 + 93 + The packet library serializes the communication data using the 94 + json library. No expertise with json is required to accomplish this assignment. 95 + These functions were developed to ease the packet generation process of the 96 + project. The packet library is defined in file Packet.py. 97 + 98 + In this project all packet objects have a packet type among the following 99 + command type options: 100 + 101 + * reg: to register a data node 102 + * list: to ask for a list of files 103 + * put: to put files in the DFS 104 + * get: to get files from the DFS 105 + * dblks: to add the data block ids to the files. 106 + 107 + #### **Functions:** 108 + 109 + ##### **General Functions** 110 + 111 + * getEncodedPacket(): Returns a serialized packet ready to send over the 112 + network. You first need to build the packet; see the Build**\<X\>**Packet 113 + functions. 114 + * DecodePacket(packet): Receives a serialized message and turns it into a 115 + packet object. 116 + * getCommand(): Returns the command type of the packet 117 + 118 + ##### **Packet Registration Functions** 119 + 120 + * BuildRegPacket(addr, port): Builds a registration packet. 121 + * getAddr(): Returns the IP address of a server. Useful for registration 122 + packets 123 + * getPort(): Returns the port number of a server. Useful for registration 124 + packets 125 + 126 + ##### **Packet List Functions** 127 + 128 + * BuildListPacket(): Builds a list packet for file listing 129 + * BuildListResponse(filelist): Builds a list response packet with the list of 130 + files. 131 + * getFileArray(): Returns a list of files 132 + 133 + ##### **Get Packet Functions** 134 + 135 + * BuildGetPacket(fname): Builds a get packet to request a file by name. 136 + * BuildGetResponse(metalist, fsize): Builds a list of data node servers with 137 + the blocks of a file, and the file size. 
138 + * getFileName(): Returns the file name in a packet. 139 + * getDataNodes(): Returns a list of data servers. 140 + 141 + ##### **Put Packet Functions (Put Blocks)** 142 + 143 + * BuildPutPacket(fname, size): Builds a put packet to put fname and file size 144 + in the metadata server. 145 + * getFileInfo(): Returns the file info in a packet. 146 + * BuildPutResponse(metalist): Builds a list of data node servers where the data 147 + blocks of a file can be stored, i.e., a list of available data servers. 148 + * BuildDataBlockPacket(fname, block\_list): Builds a data block packet. 149 + Contains the file name and the list of blocks for the file. See [block 150 + list](http://ccom.uprrp.edu/~jortiz/clases/ccom4017/asig04/#block_list) to 151 + review the content of a block list. 152 + * getDataBlocks(): Returns a list of data blocks 153 + 154 + ##### **Get Data block Functions (Get Blocks)** 155 + 156 + * BuildGetDataBlockPacket(blockid): Builds a get data block packet. Useful 157 + when requesting a data block from a data node. 158 + * getBlockID(): Returns the block\_id from a packet. 159 + 160 + # Instructions 161 + 162 + Write and complete code for an unreliable and insecure distributed file server 163 + following the specifications below. 164 + 165 + ### **Design specifications.** 166 + 167 + For this project you will design and complete a distributed file system. You 168 + will write a DFS with tools to list the files, and to copy files from and to 169 + the DFS. 170 + 171 + Your DFS will consist of: 172 + 173 + * A metadata server: which will contain the metadata (inode) information of the 174 + files in your file system. It will also keep a registry of the data servers 175 + that are connected to the DFS. 176 + * Data nodes: The data nodes will contain chunks (some blocks) of the file that 177 + you are storing in the DFS. 178 + * List command: A command to list the files stored in the DFS. 
179 + * Copy command: A command that will copy files from and to the DFS. 180 + 181 + ### **The metadata server** 182 + 183 + The metadata server contains the metadata (inode) information of the files in 184 + your file system. It will also keep a registry of the data servers that are 185 + connected to the DFS. 186 + 187 + Your metadata server must provide the following services: 188 + 189 + 1. Listen to the data nodes that are part of the DFS. Every time a new data 190 + node registers to the DFS the metadata server must keep the contact information 191 + of that data node. This is (IP Address, Listening Port). 192 + * To ease the implementation of the DFS, the directory file system must 193 + contain three things: 194 + * the path of the file in the file system (filename) 195 + * the nodes that contain the data blocks of the files 196 + * the file size 197 + 2. Every time a client (commands list or copy) contacts the metadata server 198 + for: 199 + * get: requesting to read a file: the metadata server must check if the file 200 + is in the DFS database, and if it is, it must return the nodes with the 201 + blocks\_ids that contain the file. 202 + * put: requesting to write a file: the metadata server must: 203 + * insert in the database the path of the new file (with its name), and its 204 + size. 205 + * return a list of available data nodes where to write the chunks of the 206 + file 207 + * dblks: then store the data blocks that have the information of the data 208 + nodes and the block ids of the file. 209 + * list: requesting to list files: 210 + * the metadata server must return a list with the files in the DFS and 211 + their size. 212 + 213 + The metadata server must be run: 214 + 215 + python meta-data.py \<port, default=8000\> 216 + 217 + If no port is specified the port 8000 will be used by default. 218 + 219 + ### **The data node server** 220 + 221 + The data node is the process that receives and saves the data blocks of the 222 + files. 
It must register with the metadata server as soon as it starts its 223 + execution. The data node receives the data from the clients when the client 224 + wants to write a file, and returns the data when the client wants to read a 225 + file. 226 + 227 + Your data node must provide the following services: 228 + 229 + 1. put: Listen to writes: 230 + * The data node will receive blocks of data, store them using a unique id, 231 + and return the unique id. 232 + * Each node must have its own block storage path. You may run more than one 233 + data node per system. 234 + 2. get: Listen to reads 235 + * The data node will receive requests for data blocks, and it must read the 236 + data block, and return its content. 237 + 238 + The data nodes must be run: 239 + 240 + python data-node.py \<server address\> \<port\> \<data path\> \<metadata 241 + port,default=8000\> 242 + 243 + Server address is the metadata server address, port is the data-node port 244 + number, data path is a path to a directory to store the data blocks, and 245 + metadata port is the optional metadata server port, needed if the server was run 246 + on a port other than the default. 247 + 248 + **Note:** Since you most probably do not have many different computers at your 249 + disposal, you may run more than one data-node on the same computer, but each 250 + must use a different listening port and data block directory. 251 + 252 + ### **The list client** 253 + 254 + The list client just sends a list request to the metadata server and then waits 255 + for a list of file names with their sizes. 256 + 257 + The output must look like: 258 + 259 + /home/cheo/asig.cpp 30 bytes 260 + /home/hola.txt 200 bytes 261 + /home/saludos.dat 2000 bytes 262 + 263 + The list client must be run: 264 + 265 + python ls.py \<server\>:\<port, default=8000\> 266 + 267 + Where server is the metadata server IP and port is the metadata server port. 
If 268 + no port is indicated, the default port 8000 is used and no ':' character 269 + is necessary. 270 + 271 + ### **The copy client** 272 + 273 + The copy client is more complicated than the list client. It is in charge of 274 + copying files from and to the DFS. 275 + 276 + The copy client must: 277 + 278 + 1. Write files in the DFS 279 + * The client must send to the metadata server the file name and size of the 280 + file to write. 281 + * Wait for the metadata server response with the list of available data 282 + nodes. 283 + * Send the data blocks to each data node. 284 + * You may decide to divide the file over the number of data servers. 285 + * You may divide the file into X-size blocks and send them to the data 286 + servers in round-robin order. 287 + 2. Read files from the DFS 288 + * Contact the metadata server with the file name to read. 289 + * Wait for the block list with the block ids and data server information. 290 + * Retrieve the file blocks from the data servers. 291 + * This part will depend on the division algorithm used in step (1). 292 + 293 + The copy client must be run: 294 + 295 + Copy from DFS: 296 + 297 + python copy.py \<server\>:\<port\>:\<dfs file path\> \<destination file\> 298 + 299 + To DFS: 300 + 301 + python copy.py \<source file\> \<server\>:\<port\>:\<dfs file path\> 302 + 303 + Where server is the metadata server IP address, and port is the metadata server 304 + port. 305 + 306 + # Creating an empty database 307 + 308 + The script createdb.py generates an empty database *dfs.db* for the project. 309 + 310 + python createdb.py 311 + 312 + # Deliverables 313 + 314 + * The source code of the programs (well documented) 315 + * A README file with: 316 + * a description of the programs, including a brief description of how they 317 + work. 318 + * who helped you or discussed issues with you to finish the program. 319 + * Video description of the project with implementation details. For any doubt, 320 + please consult the professor. 
321 + 322 + # Rubric 323 + 324 + * (10 pts) the programs run 325 + * (80 pts) quality of the working solutions 326 + * (20 pts) Metadata server implemented correctly 327 + * (25 pts) Data server implemented correctly 328 + * (10 pts) List client implemented correctly 329 + * (25 pts) Copy client implemented correctly 330 + * (10 pts) quality of the README 331 + * (10 pts) description of each program and how it works. 332 + * No project will be graded without submission of the video explaining how the 333 + project was implemented.
+183
src/copy.c
··· 1 + #include <stdio.h> 2 + #include <assert.h> 3 + #include <stdlib.h> 4 + #include <string.h> 5 + #include "lib/lib.h" 6 + #define LIST_IMPLEMENTATION 7 + #include "lib/list.h" 8 + #define BUF_LEN 0x100 9 + 10 + typedef const char* Str; 11 + 12 + /* ----- Memory ----- */ 13 + 14 + void* copy(void* src, size_t len) { 15 + void* buf = calloc(len, sizeof(char)); 16 + exists(buf); 17 + memcpy(buf, src, len); 18 + return buf; 19 + } 20 + 21 + /* ----- List String ----- */ 22 + 23 + DefList(Str); 24 + 25 + /* Split str on sep; assumes each field is shorter than BUF_LEN. */ ListStr split(Str str, char sep) { 26 + ListStr out = NULL; 27 + char current[BUF_LEN] = {0}; 28 + size_t len = 0; 29 + for (size_t i = 0, n = strlen(str); i < n; ++i) { 30 + if (str[i] == sep) { 31 + out = ListStr_cons(copy(current, len), out); 32 + memset(current, 0, len); 33 + len = 0; 34 + } else { 35 + current[len++] = str[i]; 36 + } 37 + } 38 + out = ListStr_cons(copy(current, len), out); 39 + 40 + return ListStr_reverse(out); 41 + } 42 + 43 + void ListStr_print(ListStr list) { 44 + if (list != NULL) { 45 + printf("%s,", list->head); 46 + ListStr_print(list->rest); 47 + } else { 48 + printf("]\n"); 49 + } 50 + } 51 + /* ----- UUID ----- */ 52 + 53 + typedef struct { 54 + uint32_t id1; 55 + uint16_t id2; 56 + uint16_t id3; 57 + uint16_t id4; 58 + uint64_t id5 : 48; 59 + } UUID; 60 + 61 + uint64_t from_hex(Str str, size_t len) { 62 + uint64_t acc = 0; 63 + for (size_t i = 0; i < len; ++i) { 64 + char c = str[i]; 65 + if ((c <= '9' && c >= '0')) { 66 + acc = acc * 16 + (c - '0'); 67 + } else if ((c <= 'F' && c >= 'A')) { 68 + acc = acc * 16 + (c - 'A') + 10; 69 + } else if ((c <= 'f' && c >= 'a')) { 70 + acc = acc * 16 + (c - 'a') + 10; 71 + } else { 72 + fprintf(stderr, "ERROR: incorrect hex conversion: %s\n", str); 73 + exit(1); 74 + } 75 + } 76 + return acc; 77 + } 78 + 79 + UUID new_uuid() { 80 + char uuid_str[48] = {0}; 81 + FILE* uuid_file = fopen("/proc/sys/kernel/random/uuid", "r"); 82 + exists(uuid_file); 83 + 84 + try(fgets(uuid_str, 37, uuid_file) == NULL); fclose(uuid_file); 85 
+ printf("uuid_str: %s\n", uuid_str); 86 + 87 + ListStr uuid_list = split(uuid_str, '-'); 88 + exists(uuid_list); 89 + if (ListStr_length(uuid_list) != 5) { 90 + fprintf(stderr, "ERROR: Misread UUID from file.\n"); 91 + exit(1); 92 + } 93 + 94 + Str id1 = ListStr_at(uuid_list, 1); 95 + Str id2 = ListStr_at(uuid_list, 2); 96 + Str id3 = ListStr_at(uuid_list, 3); 97 + Str id4 = ListStr_at(uuid_list, 4); 98 + Str id5 = ListStr_at(uuid_list, 5); 99 + printf("ids: %s-%s-%s-%s-%s\n", id1, id2, id3, id4, id5); 100 + if (strlen(id1) != 8 || strlen(id2) != 4 || strlen(id3) != 4 || strlen(id4) != 4 || strlen(id5) != 12) { 101 + fprintf(stderr, "ERROR: UUID sizes are wrong."); 102 + exit(1); 103 + } 104 + uint32_t id1i = from_hex(id1, strlen(id1)); 105 + uint16_t id2i = from_hex(id2, strlen(id2)); 106 + uint16_t id3i = from_hex(id3, strlen(id3)); 107 + uint16_t id4i = from_hex(id4, strlen(id4)); 108 + uint64_t id5i = from_hex(id5, strlen(id5)); 109 + return (UUID) { 110 + .id1 = id1i, 111 + .id2 = id2i, 112 + .id3 = id3i, 113 + .id4 = id4i, 114 + .id5 = id5i, 115 + }; 116 + } 117 + 118 + typedef struct { 119 + char l; 120 + char r; 121 + } HexByte; 122 + 123 + DefList(HexByte); 124 + 125 + HexByte to_hex1(unsigned char c) { 126 + char memo[16] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' }; 127 + return (HexByte) { 128 + .r = memo[c & 0xF], 129 + .l = memo[(c / 16) & 0xF], 130 + }; 131 + } 132 + 133 + ListHexByte to_hex(uint64_t n) { 134 + ListHexByte out = NULL; 135 + while (n != 0) { 136 + unsigned char c = n & 0xFF; 137 + n >>= 8; 138 + out = ListHexByte_cons(to_hex1(c), out); 139 + } 140 + return (out); 141 + } 142 + 143 + void ListHexByte_print(ListHexByte list, char end) { 144 + if (list != NULL) { 145 + printf("%c%c", list->head.l, list->head.r); 146 + ListHexByte_print(list->rest, end); 147 + } else { 148 + printf("%c", end); 149 + } 150 + } 151 + 152 + void print_uuid(UUID uuid) { 153 + ListHexByte_print(to_hex(uuid.id1), '-'); 
154 + ListHexByte_print(to_hex(uuid.id2), '-'); 155 + ListHexByte_print(to_hex(uuid.id3), '-'); 156 + ListHexByte_print(to_hex(uuid.id4), '-'); 157 + ListHexByte_print(to_hex(uuid.id5), ' '); 158 + } 159 + 160 + /* ----- Main ----- */ 161 + 162 + int main(int argc, char** argv) { 163 + if (argc != 2) { 164 + fprintf(stderr, "ERROR: You must supply a path\n"); 165 + exit(1); 166 + } 167 + 168 + ListStr list = split(argv[1], ':'); 169 + if (ListStr_length(list) != 3) { 170 + fprintf(stderr, "ERROR: You must supply three things in the path\n"); 171 + exit(1); 172 + } 173 + 174 + print_uuid(new_uuid()); 175 + printf("\n"); 176 + 177 + Str address = ListStr_at(list, 1); 178 + Str port = ListStr_at(list, 2); 179 + Str file = ListStr_at(list, 3); 180 + printf("address: %s\n", address); 181 + printf("port: %s\n", port); 182 + printf("file: %s\n", file); 183 + }
+184
src/lib/enchufe.h
··· 1 + #ifndef ENCHUFE_H_
2 + #define ENCHUFE_H_
3 + #include <errno.h>
4 + #include <netinet/in.h>
5 + #include <stdint.h>
6 + #include <stdio.h>
7 + #include <stdlib.h>
8 + #include <string.h>
9 + #include <sys/socket.h>
10 +
11 + // Macro to test whether a value is negative. In general, this library
12 + // prefers to crash the program rather than let the user handle an error.
13 + #define try(a) do { \
14 +     if ((a) < 0) { \
15 +         fprintf(stderr, "[ERROR]: %s:%d %s\n", __FILE__, __LINE__, strerror(errno)); \
16 +         exit (EXIT_FAILURE); \
17 +     } \
18 + } while(0)
19 +
20 + typedef int FD; // short for FileDescriptor
21 + typedef uint16_t Port;
22 + typedef uint8_t Byte;
23 +
24 + // Byte buffer; use it for whatever you need.
25 + typedef struct {
26 +     size_t len;
27 +     Byte* buf;
28 + } Buffer;
29 +
30 + // Converts a C string into a byte buffer.
31 + Buffer atob(const char* str);
32 +
33 +
34 + // Type for an IPv4 address. It also helps you convert between little
35 + // endian and big endian: the byte data appears as
36 + // bytes[3]bytes[2]bytes[1]bytes[0] in memory.
37 + typedef union {
38 +     Byte bytes[4];
39 +     uint32_t ip;
40 + } IPv4;
41 +
42 + // Enchufe (a socket: a file descriptor plus an address).
43 + typedef struct {
44 +     FD fd;
45 +     struct sockaddr_in addr;
46 +     socklen_t addrlen;
47 + } Enchufe;
48 +
49 + // Receptaculo (a socket address).
50 + typedef struct {
51 +     struct sockaddr_in addr;
52 +     socklen_t addrlen;
53 + } Receptaculo;
54 +
55 + // Creates a new file descriptor for an enchufe. (static inline so each
56 + // translation unit gets its own definition; plain inline would need an
57 + // extern definition somewhere to link.)
58 + static inline FD nuevo() {
59 +     FD fd = socket(PF_INET, SOCK_STREAM, 0);
60 +     try (fd);
61 +     return fd;
62 + }
63 +
64 + // Creates a receptaculo. sin_port must be in network byte order, so the
65 + // port is passed through htons here.
66 + static inline Receptaculo receptaculo(IPv4 ip, Port port) {
67 +     struct sockaddr_in name = {
68 +         .sin_family = AF_INET,
69 +         .sin_port = htons(port),
70 +         .sin_addr = {
71 +             .s_addr = ip.ip,
72 +         },
73 +     };
74 +     return (Receptaculo){
75 +         .addr = name,
76 +         .addrlen = sizeof(name),
77 +     };
78 + }
79 +
80 + // Takes a file descriptor and a receptaculo and joins them; in other
81 + // words, it squashes (aplasta) them together. An enchufe is basically a
82 + // file descriptor with an IP.
83 + static inline Enchufe aplasta(FD fd, Receptaculo rec) {
84 +     return (Enchufe){
85 +         .fd = fd,
86 +         .addr = rec.addr,
87 +         .addrlen = rec.addrlen,
88 +     };
89 + }
90 +
91 + // Creates an enchufe.
92 + Enchufe enchufa(IPv4 ip, Port port);
93 +
94 + // Establishes the connection from your machine to wherever the enchufe
95 + // points.
96 + void conecta(Enchufe enchufe);
97 +
98 + // Binds (amarra) the IP address given to the enchufe to its file
99 + // descriptor. There are cases where you do not want them bound, such as
100 + // when you do not care which address a client connecting to a server
101 + // gets, which is why enchufa(ip, port) does not bind the file descriptor
102 + // to the port by default.
103 + void amarra(Enchufe enchufe);
104 +
105 + // Tells the enchufe how many pending connections it may queue. By default
106 + // no connections can be made, so if you are writing a server you must
107 + // call this function.
108 + void escucha(Enchufe enchufe, size_t len);
109 +
110 + // Blocks the thread until a client connects. Returns the client's enchufe
111 + // so you can talk to the client. You must desenchufa it when you are done
112 + // with the connection.
113 + Enchufe acepta(Enchufe enchufe);
114 +
115 + // Sends a byte buffer to a client.
116 + void zumba(Enchufe enchufe, Buffer in_buf);
117 +
118 + // Receives a byte buffer from a client. Returns the number of bytes read;
119 + // if it returns 0, the client closed the connection.
120 + size_t recibe(Enchufe enchufe, Buffer out_buf);
121 +
122 + // Frees the resources the enchufes hold.
123 + void desenchufa(Enchufe enchufe);
124 +
125 + #ifdef ENCHUFE_IMPLEMENTATION
126 + #include <unistd.h> // read, close and other POSIX functions
127 + #include <sys/socket.h> // all the socket functions
128 + #include <netinet/in.h> // sockaddr_in
129 + #include <arpa/inet.h> // inet_pton
130 +
131 + // This function just composes the three inline helpers above.
132 + Enchufe enchufa(IPv4 ip, Port port) {
133 +     return aplasta(nuevo(), receptaculo(ip, port));
134 + }
135 +
136 + // Wrapper for connect.
137 + void conecta(Enchufe enchufe) {
138 +     try (connect(enchufe.fd, (const struct sockaddr*)&enchufe.addr, enchufe.addrlen));
139 + }
140 +
141 + // Wrapper for bind.
142 + void amarra(Enchufe enchufe) {
143 +     try (bind(enchufe.fd, (struct sockaddr*)&enchufe.addr, enchufe.addrlen));
144 + }
145 +
146 + // Wrapper for listen.
147 + void escucha(Enchufe enchufe, size_t len) {
148 +     try (listen(enchufe.fd, (int)len));
149 + }
150 +
151 + // Wrapper for accept.
152 + Enchufe acepta(Enchufe enchufe) {
153 +     FD fd = accept(enchufe.fd, (struct sockaddr*)&enchufe.addr, &enchufe.addrlen);
154 +     try (fd);
155 +     return (Enchufe){
156 +         .fd = fd,
157 +         .addr = enchufe.addr,
158 +         .addrlen = enchufe.addrlen,
159 +     };
160 + }
161 +
162 + // Wrapper for write.
163 + void zumba(Enchufe enchufe, Buffer buf) {
164 +     try (write(enchufe.fd, buf.buf, buf.len));
165 + }
166 +
167 + // Wrapper for read.
168 + size_t recibe(Enchufe enchufe, Buffer buf) {
169 +     int64_t bytes_read = read(enchufe.fd, buf.buf, buf.len);
170 +     try (bytes_read);
171 +     return (size_t)bytes_read;
172 + }
173 +
174 + // Wrapper for close.
175 + void desenchufa(Enchufe enchufe) {
176 +     close(enchufe.fd);
177 + }
178 +
179 + // Converts a string into a buffer.
180 + Buffer atob(const char* str) {
181 +     return (Buffer){
182 +         .buf = (Byte*)str,
183 +         .len = strlen(str),
184 +     };
185 + }
186 + #endif
187 +
188 + #endif // ENCHUFE_H_
+184
src/lib/lib.h
··· 1 + #ifndef LIB_H_
2 + #define LIB_H_
3 + #include "enchufe.h"
4 + #include <assert.h>
5 +
6 + // Macro to detect a NULL pointer. Use it if you prefer to crash the
7 + // program when you encounter a NULL pointer.
8 + #define exists(a) do { \
9 +     if ((a) == NULL) { \
10 +         fprintf(stderr, "[ERROR]: %s:%d Null pointer encountered, %s\n", __FILE__, __LINE__, strerror(errno)); \
11 +         exit (EXIT_FAILURE); \
12 +     } \
13 + } while(0)
14 +
15 + // Time is its own typedef in case you prefer Proc to carry a different
16 + // unit of time.
17 + typedef Byte Time;
18 +
19 + // This is what gets sent and received over the socket.
20 + typedef struct {
21 +     Time time;
22 +     Buffer program;
23 + } Proc;
24 +
25 + // Dynamic list of Procs.
26 + typedef struct {
27 +     Proc* procs;
28 +     size_t len;
29 + } Procs;
30 +
31 + void* copy(void* src, size_t nbytes);
32 +
33 + // Converts a buffer into a list of Procs.
34 + Procs deserialize(Buffer out_buf, size_t msg_len);
35 +
36 + // Converts a Proc into a buffer.
37 + Buffer serialize(Proc);
38 +
39 + // Converts a string representing an IPv4 address into an IPv4.
40 + IPv4 parse_address(const char* str);
41 +
42 + // Verifies that the string sent over the socket is valid.
43 + Buffer validate_str(Buffer str, size_t max_len);
44 +
45 + // Creates a copy of a buffer in memory and returns it.
46 + Buffer bufcpy(Buffer in);
47 +
48 + // A bounded, and therefore safer, alternative to strlen.
49 + size_t safe_strlen(const char* str, size_t max_len);
50 +
51 + #ifdef LIB_IMPLEMENTATION
52 + #include "log.h"
53 + #include <stdlib.h>
54 + #include <string.h>
55 +
56 + void* copy(void* src, size_t nbytes) {
57 +     void* out = malloc(nbytes);
58 +     memcpy(out, src, nbytes);
59 +     return out;
60 + }
61 +
62 + // Allocates new memory from src and returns a buffer to that memory. The
63 + // user must free that memory.
64 + Buffer bufcpy(Buffer src) {
65 +     Byte* buf = (Byte*)malloc(src.len * sizeof(Byte));
66 +     exists(buf);
67 +     memcpy(buf, src.buf, src.len);
68 +     return (Buffer){
69 +         .buf = buf,
70 +         .len = src.len,
71 +     };
72 + }
73 +
74 + // Checks whether the buffer contains a valid string and that the size provided
75 + // matches the size of that string.
76 + Buffer validate_str(Buffer str, size_t max_len) {
77 +     size_t calculated_len = safe_strlen((const char*)str.buf, max_len);
78 +     if (str.len != calculated_len) {
79 +         log(ERROR, "%s:%d String's length (%zu) is not equal to given length (%zu).", __FILE__, __LINE__, calculated_len, str.len);
80 +
81 +         printf("\nBuffer contains: ");
82 +         for (size_t i = 0; i < max_len; ++i) printf("[%d] ", str.buf[i]);
83 +         printf("\n");
84 +
85 +         exit(1);
86 +     }
87 +     if (str.len > max_len) {
88 +         log(ERROR, "%s:%d String's length (%zu) is larger than the buffer that contains it (%zu).\n", __FILE__, __LINE__, str.len, max_len);
89 +
90 +         printf("\nBuffer contains: ");
91 +         for (size_t i = 0; i < max_len; ++i) printf("[%d] ", str.buf[i]);
92 +         printf("\n");
93 +
94 +         exit(1);
95 +     }
96 +     return str;
97 + }
98 +
99 + // Converts the buffer received from a socket into an array of processes. The
100 + // user must free this memory.
101 + Procs deserialize(Buffer out_buf, size_t msg_len) {
102 +     Procs procs = {
103 +         .procs = (Proc*)calloc(1, sizeof(Proc)),
104 +         .len = 1,
105 +     };
106 +     exists(procs.procs);
107 +
108 +     // This loop will continue until all Proc's have been deserialized.
109 +     size_t buf_idx = 0;
110 +     for (size_t j = 0; buf_idx < msg_len; ++j) {
111 +         // Grow the array; realloc sizes are in bytes, hence * sizeof(Proc).
112 +         if (j == procs.len) {
113 +             procs.procs = (Proc*)realloc(procs.procs, (procs.len + 1) * sizeof(Proc));
114 +             procs.len = procs.len + 1;
115 +         }
116 +
117 +         // curr represents the first byte where the Proc lives in the message.
118 +         Byte* curr = out_buf.buf + buf_idx;
119 +
120 +         // Parse out the data members.
121 + Time time = *(Time*)curr; 122 + size_t len = *(size_t*)(curr + sizeof(Time)); 123 + Byte* str_buf = curr + sizeof(Time) + sizeof(size_t); 124 + 125 + // Validate the string in the buffer. 126 + Buffer program = validate_str((Buffer){.len = len, .buf = str_buf}, msg_len - buf_idx); 127 + 128 + // Insert everything into the proc list. 129 + procs.procs[j] = (Proc){ 130 + .time = time, 131 + .program = bufcpy(program), 132 + }; 133 + 134 + // This calculates where the next Proc will be. 135 + buf_idx += sizeof(Time) + sizeof(size_t) + program.len + 1; 136 + } 137 + return procs; 138 + } 139 + 140 + // This function turns a proc into a buffer. In order to do that, this function 141 + // reinterprets everything on the proc as a sequence of bytes. 142 + Buffer serialize(Proc proc) { 143 + // First, determine how long the buffer has to be. 144 + size_t len = sizeof(Time) + sizeof(size_t) + proc.program.len + 1; 145 + 146 + // Allocate the bytes in the buffer. 147 + Buffer buf = { 148 + .len = len, 149 + .buf = (Byte*)calloc(len, sizeof(Byte)), 150 + }; 151 + exists(buf.buf); 152 + 153 + // Copy everything in the buffer. 154 + memcpy((void*)buf.buf, (void*)&proc.time, sizeof(Time)); 155 + memcpy((void*)(buf.buf + sizeof(Time)), (void*)&proc.program.len, sizeof(size_t)); 156 + memcpy((void*)(buf.buf + sizeof(Time) + sizeof(size_t)), (void*)proc.program.buf, proc.program.len); 157 + return buf; 158 + } 159 + 160 + // This function takes a string representing an IPv4 address and converts it 161 + // into an IPv4 type. 162 + IPv4 parse_address(const char* str) { 163 + size_t len = safe_strlen(str, 15); 164 + 165 + IPv4 ip = {0}; 166 + size_t curr_byte = 0; 167 + for (size_t i = 0; i < len; ++i) { 168 + if (str[i] == '.') { 169 + ++curr_byte; 170 + } else { 171 + ip.bytes[curr_byte] = (Byte)(ip.bytes[curr_byte] * 10 + (str[i] - '0')); 172 + } 173 + } 174 + 175 + return ip; 176 + } 177 + 178 + // uses memchr to calculate strlen. 
179 + size_t safe_strlen(const char* str, size_t max_len) {
180 +     const char* end = memchr(str, '\0', max_len);
181 +     exists(end); // crash if there is no terminator within max_len
182 +     return (size_t)(end - str);
183 + }
184 + #endif
185 +
186 + #endif // LIB_H_
+75
src/lib/list.h
··· 1 + #ifndef LIST_H_
2 + #define LIST_H_
3 + /* ----- List definition ----- */
4 + #ifndef LIST_IMPLEMENTATION
5 + #define DefList(type) \
6 +     struct Node##type { \
7 +         type head; \
8 +         struct Node##type* rest; \
9 +     }; \
10 +     typedef struct Node##type* List##type; \
11 +     void List##type##_deinit(List##type list); \
12 +     type List##type##_car(struct Node##type node); \
13 +     struct Node##type* List##type##_cdr(struct Node##type node); \
14 +     List##type List##type##_cons(type a, List##type list); \
15 +     List##type List##type##_rev(List##type list, List##type a); \
16 +     List##type List##type##_reverse(List##type list); \
17 +     size_t List##type##_length(List##type list); \
18 +     type List##type##_at(List##type list, size_t index);
19 + #else
20 + #define DefList(type) \
21 +     struct Node##type { \
22 +         type head; \
23 +         struct Node##type* rest; \
24 +     }; \
25 +     typedef struct Node##type* List##type; \
26 +     \
27 +     /* Frees every node. (The old version asserted list->rest == NULL \
28 +        after the recursive call, but deinit only nulls its local copy of \
29 +        the pointer, so that assert fired on any list of length >= 2.) */ \
30 +     void List##type##_deinit(List##type list) { \
31 +         if (list != NULL) { \
32 +             List##type##_deinit(list->rest); \
33 +             free(list); \
34 +         } \
35 +     } \
36 +     \
37 +     /* Returns the head without touching the rest of the list. */ \
38 +     type List##type##_car(struct Node##type node) { \
39 +         return node.head; \
40 +     } \
41 +     \
42 +     struct Node##type* List##type##_cdr(struct Node##type node) { \
43 +         return node.rest; \
44 +     } \
45 +     \
46 +     List##type List##type##_cons(type a, List##type list) { \
47 +         struct Node##type* node = (struct Node##type*)calloc(1, sizeof(struct Node##type)); \
48 +         exists(node); \
49 +         *node = (struct Node##type){ .head = a, .rest = list }; \
50 +         return (List##type)node; \
51 +     } \
52 +     \
53 +     /* Reverses list onto a, freeing the nodes of the input list. */ \
54 +     List##type List##type##_rev(List##type list, List##type a) { \
55 +         if (list != NULL) { \
56 +             List##type out = List##type##_rev(list->rest, List##type##_cons(list->head, a)); \
57 +             free(list); \
58 +             return out; \
59 +         } else return a; \
60 +     } \
61 +     \
62 +     /* Consumes list and returns it reversed. */ \
63 +     List##type List##type##_reverse(List##type list) { \
64 +         return List##type##_rev(list, NULL); \
65 +     } \
66 +     \
67 +     size_t List##type##_length(List##type list) { \
68 +         if (list != NULL) return List##type##_length(list->rest) + 1; \
69 +         else return 0; \
70 +     } \
71 +     \
72 +     /* 1-based indexing: _at(list, 1) is the head; crashes when out of range. */ \
73 +     type List##type##_at(List##type list, size_t index) { \
74 +         if (list != NULL) return index == 1 ? list->head : List##type##_at(list->rest, index - 1); \
75 +         else exists(NULL); \
76 +         return (type){0}; \
77 +     }
78 + #endif
79 + #endif // LIST_H_
+44
src/lib/log.h
··· 1 + #ifndef LOG_H_
2 + #define LOG_H_
3 +
4 + // Which kind of log do you want to make?
5 + typedef enum {
6 +     INFO,
7 +     WARN,
8 +     ERROR,
9 + } LogLevel;
10 +
11 + // Writes a printf-style message, prefixed with its level, to stdout
12 + // (stderr for ERROR).
13 + void log(LogLevel level, const char* format, ...);
14 +
15 + #ifdef LOG_IMPLEMENTATION
16 + #include <stdio.h>
17 + #include <stdarg.h>
18 + #include "log.h"
19 +
20 + void log(LogLevel level, const char* format, ...) {
21 +     FILE* out = stdout;
22 +
23 +     switch (level) {
24 +         case WARN: {
25 +             fprintf(out, "[WARN]: ");
26 +         } break;
27 +         case ERROR: {
28 +             out = stderr;
29 +             fprintf(out, "[ERROR]: ");
30 +         } break;
31 +         default:
32 +         case INFO: {
33 +             fprintf(out, "[INFO]: ");
34 +         } break;
35 +     };
36 +
37 +     va_list list;
38 +     va_start(list, format);
39 +     vfprintf(out, format, list);
40 +     va_end(list);
41 + }
42 + #endif
43 +
44 + #endif // LOG_H_