Learning Rust and Lemmy
Welcome
A collaborative space for people to work together on learning Rust, learning about the Lemmy code base, discussing whatever confusions or difficulties we're having in these endeavours, and solving problems, including, hopefully, some contributions back to the Lemmy code base.
Rules TL;DR: Be nice, constructive, and focus on learning and working together on understanding Rust and Lemmy.
Running Projects
- Rust for Lemmings Reading Club (portal)
- Rust beginners challenges (portal)
- Heroically Helpful Comments
Policies and Purposes
- This is a place to learn and work together.
- Questions and curiosity are welcome and encouraged.
- This isn't a technical support community. Those with technical knowledge and experience aren't obliged to help, though help is very welcome. This is closer to a library of study groups than to Stack Overflow. That said, forming a repository of useful information would be a good side effect.
- This isn't an issue tracker for Lemmy (or Rust), or a place for suggestions. Instead, it's where the nature of an issue, what possible solutions might exist, and how they could be (or were) implemented can be discussed; or where the means by which a particular suggestion could be implemented is discussed.
See also:
Rules
- Lemmy.ml rule 2 applies strongly: "Be respectful, even when disagreeing. Everyone should feel welcome" (see Dessalines's post). This is a constructive space.
- Don't demean, intimidate, or do anything that isn't constructive and encouraging to anyone trying to learn or understand. People should feel free to ask questions, be curious, and fill gaps in their knowledge and understanding.
- Posts and comments should be (more or less) within scope (on which see Policies and Purposes above).
- See the Lemmy Code of Conduct
- Where applicable, rules should be interpreted in light of the Policies and Purposes.
Relevant links and Related Communities
- Lemmy Organisation on GitHub
- Lemmy Documentation
- General Lemmy Discussion Community
- Lemmy Support Community
- Rust Community on lemmy.ml
- Rust Community on programming.dev
Thumbnail and banner generated by ChatGPT.
Please provide any feedback or thoughts on the approach and the posted code below!
My Approach
So my approach was straightforward (no real algorithm), implemented basically as a state machine. I knew ahead of time that using match statements in Rust was likely a good approach, so I first prototyped my approach in Python and debugged a bit there before implementing the idea in Rust.
My approach was basically to go through both files simultaneously, line by line, looking for where lines matched or differed. When lines matched/differed in the same way, they were grouped into "chunks", each of a specific kind depending on the nature of the matching/differing. The key piece to sort out was how to detect the various ways in which the two files could differ from each other (eg, addition, deletion, modification).
I settled on this idea, which seems to work: a particular kind of changed-chunk is identified by (and ends with) how its final line matches one of the initial lines of the changed-chunk. The diagram below illustrates.
Beyond this the rest was sorting out the various states the process can enter and how to move between them.
Linked lines are those found to match or be identical.
Addition:
A B
- -
|-| <-\ |-| <--- highlighted lines = current chunk of lines identified as changed
|-| \ |-|
- \--> - <--- Current line
Deletion:
- -
|-| /-> |-|
|-| / |-|
- <-/ -
Modification:
- -
|-| |-|
|-| |-|
- <------> -
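The detection step in the diagram can be sketched roughly like this (a simplification of my actual code, with made-up names, so take it as illustrative only):

```rust
// Simplified classification: given the first line of the current changed
// chunk in each file, and the lines currently being read, decide whether
// the chunk is an addition or a deletion.
#[derive(Debug, PartialEq)]
enum ChunkType {
    Addition,
    Deletion,
    Unknown,
}

fn classify(a_first: &str, b_first: &str, a_now: &str, b_now: &str) -> ChunkType {
    if a_first == b_now {
        // A's first changed line reappears later in B: B gained lines.
        ChunkType::Addition
    } else if a_now == b_first {
        // B's first changed line reappears later in A: A lost lines.
        ChunkType::Deletion
    } else {
        ChunkType::Unknown
    }
}

fn main() {
    // B gained a line ("new"), so A's first changed line shows up again in B.
    assert_eq!(classify("x", "new", "x", "x"), ChunkType::Addition);
    // A lost a line ("old"), so B's first changed line shows up again in A.
    assert_eq!(classify("old", "y", "y", "y"), ChunkType::Deletion);
    println!("ok");
}
```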
Rust programming
As for the details of using rust for this, I stumbled on a state machine pattern:
let mut state = State::Start;
loop {
    state = match state {
        ...
        State::Finished => break,
    };
}
... which I found pretty nice.
I also found that using an enum for the state variable, with data associated with each state, was a useful, flexible and elegant way to go. The program basically becomes just a single enum and the looping pattern above, along with a few data structures and some methods. Is this normal/idiomatic for Rust?
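To make the shape concrete, here's a toy version of that enum-with-data pattern (the names and counting logic are invented for illustration, not from the diff program):

```rust
// Toy state machine in the enum-with-data style: each state variant
// carries the data it needs, and the loop rebinds `state` each iteration.
enum State {
    Counting { n: u32 },
    Finished { total: u32 },
}

fn run() -> u32 {
    let mut state = State::Counting { n: 0 };
    loop {
        state = match state {
            // Keep counting while under the limit.
            State::Counting { n } if n < 5 => State::Counting { n: n + 1 },
            // Limit reached: transition to the terminal state.
            State::Counting { n } => State::Finished { total: n },
            // Terminal state: break out of the loop with a value.
            State::Finished { total } => break total,
        };
    }
}

fn main() {
    println!("{}", run()); // prints 5
}
```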
The only hiccough I ran into, apart from not knowing how to do things like read standard input or a file, was thinking I could trivially match on two variables at a time. I wrote about that recently here. Once the compiler got angry at me about that, I got worried that I didn't know what I was doing at all, but AFAICT it was a silly idea in the first place, and best avoided for the sort of thing I was trying to do.
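For what it's worth, plain matching on a tuple of two values does compile (the posted read_line does exactly this); the trouble usually starts when the arms need to move out of borrowed data. A minimal sketch, with made-up names:

```rust
// Matching on two values at once: group them into a tuple and match once.
fn describe(a_end: bool, b_end: bool) -> &'static str {
    match (a_end, b_end) {
        (false, false) => "both files still going",
        (false, true) => "file B ended",
        (true, false) => "file A ended",
        (true, true) => "both files ended",
    }
}

fn main() {
    assert_eq!(describe(false, true), "file B ended");
    println!("ok");
}
```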
Another minor issue I ran into: I was tracking line numbers throughout the program, which were used as indices into a vector of strings, and I wanted both to do arithmetic on these numbers and to compare them to the lengths of the vectors of lines. I settled on sticking with usize types for all such numbers, which required some conversions ... but writing this now, I think I was just scared and should have converted and stored the vector lengths as a default i32 from the beginning and made things easier ... oh well.
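For what it's worth, a third option is to stay in usize and use the checked arithmetic methods, which sidesteps the conversions entirely. A sketch of both approaches (illustrative only):

```rust
fn main() {
    let lines = vec!["a", "b", "c"];
    let len = lines.len(); // usize

    // The conversion dance: detour through i32 for signed arithmetic ...
    let prev = i32::try_from(len).expect("length fits in i32") - 1;
    let prev = usize::try_from(prev).expect("index is non-negative");
    assert_eq!(prev, 2);

    // ... versus staying in usize: checked_sub returns None on underflow
    // instead of panicking, so the "can this go negative?" case is explicit.
    assert_eq!(len.checked_sub(1), Some(2));
    assert_eq!(0usize.checked_sub(1), None);

    println!("ok");
}
```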
I'll post the code in a separate comment below to keep things clean.
I also found that using an enum for the state variable, with data associated with each state, was a useful, flexible and elegant way to go. The program basically becomes just a single enum and the looping pattern above, along with a few data structures and some methods. Is this normal/idiomatic for Rust?
In my experience, this is the sweet spot for Rust programming. If you can formulate your approach as a state machine like this, then you almost always should - especially when writing in Rust.
The only times I'd pass on the pattern is if stuffing all of the ambient state of a problem into different states of a state machine just to make the giant loop work correctly introduces too much cognitive complexity (aka makes the code too hard to read / follow).
Thanks for the feedback! What you say tracks exactly with my feelings after writing this.
I personally did encounter enough issues with the borrow checker while writing this, largely because I'm not on top of ownership and the language enough, that I'm still somewhat wary of moving off of clean data flow patterns like this state machine one.
Maybe I should try re-writing it using "dumb" globals and a procedural/imperative style and see how I go?
What would you be reaching for, in terms of patterns/approaches, when you know your data flow is too messy for a state machine?
What would you be reaching for, in terms of patterns/approaches, when you know your data flow is too messy for a state machine?
Bare if and loop expressions. Basically what you described as "dumb globals and a procedural/imperative style". Of course, defining some custom types is almost always useful, if not necessary, for "properly" handling ownership.
Cheers!
Code
use std::env;
use std::fs::read_to_string;
// > Data Types
#[derive(Debug)]
struct FileLens {
a: usize,
b: usize
}
impl FileLens {
fn new(a_lines: &Vec<&str>, b_lines: &Vec<&str>) -> FileLens {
FileLens{
a: a_lines.len(),
b: b_lines.len()
}
}
}
#[derive(Debug)]
struct Cursors {
a: usize,
b: usize,
}
impl Cursors {
fn increment_cursor(&self, chunk: Option<&Chunk>) -> Cursors {
let (a_inc, b_inc) = match chunk {
Some(chnk) => {
match chnk.chunk_type {
ChunkType::Addition => (
-1 * (i32::try_from(chnk.b_lines.len())
.expect("Lines vector too big for i32!")),
0,
),
ChunkType::Deletion => (
0,
-1 * (i32::try_from(chnk.a_lines.len())
.expect("Lines vector too big for i32!")),
),
ChunkType::Match | ChunkType::Modification => (1, 1),
}
}
None => (1, 1),
};
let new_a_cursor = (i32::try_from(self.a).expect("oops")) + a_inc;
let new_b_cursor = (i32::try_from(self.b).expect("oops")) + b_inc;
// negative cursors should be captured here (despite bad error msg)
Cursors {
a: usize::try_from(new_a_cursor).expect("oops"),
b: usize::try_from(new_b_cursor).expect("oops"),
}
}
fn clone(&self) -> Cursors {
Cursors{a: self.a, b: self.b}
}
}
#[derive(Debug)]
enum ChunkType {
Match,
Addition,
Deletion,
Modification,
}
#[derive(Debug)]
struct Chunk {
chunk_type: ChunkType,
a_lines: Vec<String>,
b_lines: Vec<String>,
cursor: Cursors,
}
#[derive(Debug)]
#[derive(Clone)]
enum FileSpec {
A,
B,
AB
}
#[derive(Debug)]
enum CompType {
Match,
Change,
FileEnd(FileSpec)
}
// No Union for LineComp and FileSpec, so fold together
#[derive(Debug)]
struct LineComp {
// don't store the string, as copying/borrowing gets involved, instead the cursor
// ... we can get all the strings at the end when necessary
cursor: Cursors,
kind: CompType
}
// > States
#[derive(Debug)]
struct ChunkData {
lines: Vec<LineComp>,
cursor: Cursors,
start_cursor: Cursors
}
impl ChunkData {
fn set_cursor(&mut self, cursor: Cursors) {
self.cursor = cursor;
}
}
enum State {
NewChunk {cursor: Cursors},
NewLine {chunk_data: ChunkData},
ContinuingChunk {chunk_data: ChunkData, line_read: LineComp},
ContinuingChangeChunk {chunk_data: ChunkData, line_read: LineComp},
EndingChunk {chunk_data: ChunkData, line_read: LineComp},
EndingChangedChunk {chunk_data: ChunkData, chunk_type: ChunkType},
// hmmm, LineComp type is actually LineComp with FileEnd as kind ... how type?
FileEnd {chunk_data: ChunkData, line_read: FileSpec},
FileEndChange {chunk_data: ChunkData},
End
}
fn read_line(
cursor: &Cursors, file_lens: &FileLens,
a_lines: &Vec<&str>, b_lines: &Vec<&str>) -> LineComp {
let a_end = ! (cursor.a < file_lens.a);
let b_end = ! (cursor.b < file_lens.b);
match (a_end, b_end) {
(false, false) => {
if a_lines[cursor.a] == b_lines[cursor.b] {
LineComp{cursor: cursor.clone(), kind: CompType::Match}
}
else {
LineComp{cursor: cursor.clone(), kind: CompType::Change}
}
},
(false, true) => LineComp{cursor: cursor.clone(), kind: CompType::FileEnd(FileSpec::B)},
(true, false) => LineComp{cursor: cursor.clone(), kind: CompType::FileEnd(FileSpec::A)},
(true, true) => LineComp{cursor: cursor.clone(), kind: CompType::FileEnd(FileSpec::AB)},
}
}
...
... continued
fn main() {
// > Getting args
let args: Vec<String> = env::args().collect();
if args[1..].len() != 2 {
panic!(
"Must provide two paths. Instead provided {:}",
args[1..].len()
);
}
println!("Args:");
for a in args[1..].iter() {
println!("{}", a);
}
// > Reading files and splitting into lines
let a = read_to_string(&args[1]).expect("Failed to read file");
let b = read_to_string(&args[2]).expect("Failed to read file");
let a_lines: Vec<&str> = a.split("\n").collect();
let b_lines: Vec<&str> = b.split("\n").collect();
// > Initialising globals
let file_lengths = FileLens::new(&a_lines, &b_lines);
let cursor = Cursors { a: 0, b: 0 };
let mut chunks: Vec<Chunk> = vec![];
// mut necessary as overwriting in each loop
let mut state: State = State::NewChunk {cursor};
// > Loop
loop {
state = match state {
State::NewChunk { cursor } => State::NewLine {
chunk_data: ChunkData{
lines: Vec::new(), cursor: cursor.clone(), start_cursor: cursor}
},
State::NewLine { chunk_data } => {
let line_read = read_line(&chunk_data.cursor, &file_lengths, &a_lines, &b_lines);
match chunk_data.lines.as_slice() {
[] => {
match line_read.kind {
CompType::Match => State::ContinuingChunk {
chunk_data, line_read },
CompType::Change => State::ContinuingChunk {
chunk_data, line_read },
CompType::FileEnd(file_spec) => State::FileEnd {
chunk_data, line_read: file_spec},
}
},
[.., lc] => {
match lc.kind {
CompType::Match => {
match line_read.kind {
CompType::Match => State::ContinuingChunk {
chunk_data, line_read },
CompType::Change => State::EndingChunk {
chunk_data, line_read },
CompType::FileEnd(file_spec) => State::FileEnd {
chunk_data, line_read: file_spec},
}
}
CompType::Change => {
match line_read.kind {
CompType::Match => State::EndingChunk {
chunk_data, line_read },
CompType::Change => State::ContinuingChangeChunk {
chunk_data, line_read },
CompType::FileEnd(_) => State::FileEndChange {
chunk_data},
}
}
CompType::FileEnd(_) => panic!(
// error! should not have come here from FileEnd
"Failed to process file end correctly (failed at lines a:{},b:{})",
line_read.cursor.a, line_read.cursor.b),
}
}
}
},
State::ContinuingChunk { mut chunk_data, line_read } => {
chunk_data.lines.push(line_read);
let new_cursor = chunk_data.cursor.increment_cursor(None);
chunk_data.set_cursor(new_cursor);
State::NewLine{chunk_data}
},
State::ContinuingChangeChunk { chunk_data, line_read } => {
let first_lc = chunk_data.lines.first().unwrap();
if a_lines[first_lc.cursor.a] == b_lines[line_read.cursor.b] {
State::EndingChangedChunk{
chunk_data, chunk_type: ChunkType::Addition
}
}
else if a_lines[line_read.cursor.a] == b_lines[first_lc.cursor.b] {
State::EndingChangedChunk{
chunk_data, chunk_type: ChunkType::Deletion
}
}
else {
State::ContinuingChunk { chunk_data, line_read }
}
},
State::EndingChunk { chunk_data, line_read } => {
let chunk_type = match line_read.kind {
CompType::Match => ChunkType::Modification,
CompType::Change => ChunkType::Match,
CompType::FileEnd(_) => panic!(
// error! should not have come here from FileEnd
"Failed to process file end correctly (failed at lines a:{},b:{})",
line_read.cursor.a, line_read.cursor.b
)
};
let new_a_lines: Vec<String> = chunk_data.lines.iter()
.map(|lc| String::from(a_lines[lc.cursor.a]))
.collect();
let new_b_lines: Vec<String> = chunk_data.lines.iter()
.map(|lc| String::from(b_lines[lc.cursor.b]))
.collect();
chunks.push(
Chunk{
chunk_type,
a_lines: new_a_lines,
b_lines: new_b_lines,
cursor: chunk_data.start_cursor,
}
);
// continue from last read line, but with a new chunk
// ... repetitive yes, but cleaner code I think
State::NewChunk { cursor: line_read.cursor }
},
State::EndingChangedChunk { chunk_data, chunk_type } => {
let new_a_lines: Vec<String> = chunk_data.lines.iter()
.map(|lc| String::from(a_lines[lc.cursor.a]))
.collect();
let new_b_lines: Vec<String> = chunk_data.lines.iter()
.map(|lc| String::from(b_lines[lc.cursor.b]))
.collect();
chunks.push(
Chunk{
chunk_type,
a_lines: new_a_lines,
b_lines: new_b_lines,
cursor: chunk_data.start_cursor,
}
);
let new_cursor = chunk_data.cursor.increment_cursor(Some(chunks.last().unwrap()));
State::NewChunk { cursor: new_cursor }
},
State::FileEnd { chunk_data, line_read} => {
match line_read {
FileSpec::A => {
chunks.push(
Chunk{
chunk_type: ChunkType::Addition,
a_lines: vec![],
b_lines: b_lines[chunk_data.cursor.b..].iter()
.map(|s| String::from(*s)).collect(),
cursor: chunk_data.cursor,
}
);
State::End
},
FileSpec::B => {
chunks.push(
Chunk{
chunk_type: ChunkType::Deletion,
a_lines: a_lines[chunk_data.cursor.a..].iter()
.map(|s| String::from(*s)).collect(),
b_lines: vec![],
cursor: chunk_data.cursor,
}
);
State::End
},
FileSpec::AB => State::End,
}
},
State::FileEndChange { chunk_data } => {
let a_cursor = chunk_data.start_cursor.a.clone();
let b_cursor = chunk_data.start_cursor.b.clone();
chunks.push(
Chunk{
chunk_type: ChunkType::Deletion,
a_lines: a_lines[a_cursor..].iter()
.map(|s| String::from(*s)).collect(),
b_lines: vec![],
cursor: chunk_data.start_cursor,
}
);
chunks.push(
Chunk{
chunk_type: ChunkType::Addition,
a_lines: vec![],
b_lines: b_lines[b_cursor..].iter()
.map(|s| String::from(*s)).collect(),
cursor: chunk_data.cursor,
}
);
State::End
},
State::End => break,
};
}
// > Wrap up
println!("Done!");
for (i,c) in chunks.iter().enumerate() {
println!("\n--- Chunk: {} ---", i);
println!("Type: {:?}", c.chunk_type);
println!("A: {:?}", c.a_lines);
println!("B: {:?}", c.b_lines);
}
}
An idea for a future challenge: use actix_web (for example) to make a web interface for your diff tool.
Notably, this gives you an excuse to try dealing with refactoring existing code into separate modules - albeit this can be greatly trivialized with a sufficiently powerful IDE. I don't know what you've been using so far.
Dealing with file uploads can provide an interesting change over the "classic" todo CRUD as well, if one is tired of that. Not to mention the diff output/result is similarly a bit more interesting data to communicate over http.
This might be more appropriate if attempted once your first 2 challenges listed are tackled (todo web app & JSON/HTML for diff).
An idea for a future challenge: use actix_web (for example) to make a web interface for your diff tool.
Nice idea (I honestly hadn't thought of that)! Honestly kinda keen on this!
Notably, this gives you an excuse to try dealing with refactoring existing code into separate modules - albeit this can be greatly trivialized with a sufficiently powerful IDE. I don’t know what you’ve been using so far.
I'm just using rust-analyzer (inside Sublime), so nothing really powerful. What are you using/recommending? ... I realise I'm not aware of any "go-to" Rust IDE. Do JetBrains have a Rust IDE, or do people just use CLion?
As of last November, JetBrains have released, into their Early Access Program, their new Rust-flavoured variant of IntelliJ, named RustRover.
I have found it very pleasant, from the debugger working as expected to being able to tell it to use cargo clippy instead of cargo check for code hints and warnings.
A markdown parser??
This is a great challenge to get into writing parsers, and/or try out the excellent nom crate.
Thanks! I was just gonna "state machine" it, but nom looks like fun too! Hadn't seen a parsing library/approach like that before ("combinator"?)
If I'm not mistaken, it's how they teach parsing in schools. But even then, they have you figure out the exact combinators that you need first, on paper, and then implement them more-or-less entirely by hand. So you wouldn't encounter a combinator library such as this one.
I'm aware of few combinator libraries, mostly because they really require functions as first-class citizens to be supported by whatever language you're using in the first place. I wouldn't be surprised if languages like Haskell, OCaml, and F# have many.
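The core idea is small enough to hand-roll with just the standard library. A rough sketch (nom's real API is richer and differs in the details; these names are made up):

```rust
// Hand-rolled parser combinators over &str.
// A "parser" here is any function: input -> Option<(rest_of_input, matched)>.

// tag: a primitive parser that matches a fixed prefix.
fn tag<'a>(pattern: &'static str) -> impl Fn(&'a str) -> Option<(&'a str, &'a str)> {
    move |input| input.strip_prefix(pattern).map(|rest| (rest, pattern))
}

// pair: a combinator that runs one parser, then another on the leftover input.
fn pair<'a>(
    p1: impl Fn(&'a str) -> Option<(&'a str, &'a str)>,
    p2: impl Fn(&'a str) -> Option<(&'a str, &'a str)>,
) -> impl Fn(&'a str) -> Option<(&'a str, (&'a str, &'a str))> {
    move |input| {
        let (rest, a) = p1(input)?;
        let (rest, b) = p2(rest)?;
        Some((rest, (a, b)))
    }
}

fn main() {
    let hash = tag("# ");
    assert_eq!(hash("# Title"), Some(("Title", "# ")));

    // Combinators compose small parsers into bigger ones.
    let heading = pair(tag("#"), tag(" "));
    assert_eq!(heading("# Title"), Some(("Title", ("#", " "))));
    assert_eq!(heading("plain text"), None);

    println!("ok");
}
```

The appeal is that `heading` is built from `tag` without writing any new scanning logic, which is essentially what nom does at scale.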