John MacCormick. 2018. What can be computed?
Where was this published?
Summary of Results
This section needs to reflect on the experimental results and tie them back to the project's goals. It is too short now.
behavior consistent with what is was shown in previous papers
What is this behavior? Summarize here
Test Cases and Coverage
Present this first with its results, and then the experiments with their results, so the chapter flows better instead of having the test cases break between the experimental setup and the results.
Based on these plots of the data collected by PathMaker It can be used to make comparisons between algorithms is there and effective however the ability to generate grids of varying complexity is questionable especially when looking at randomized grids they are either a very low complexity or high complexity and almost no grids were generated
Rephrase - hard to parse
You can see on
Can see where?
Grid Configurations
Call it Experimental Design
This chapter describes your experimental set up and evaluation. It should also produce and describe the results of your study. The section titles below offer a typical st ## Experimental Design
Delete
code
All code snippets need to be explained or at least summarized
iagonal distance
what is d?
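For reference when defining d: the diagonal (octile) distance heuristic on an 8-connected grid is commonly written as below. This is a generic sketch of the standard formula, not the draft's actual code; the parameter names are illustrative, with the straight-step and diagonal-step costs as inputs.

```python
import math

def diagonal_distance(a, b, d_straight=1.0, d_diag=math.sqrt(2)):
    """Octile/diagonal distance between grid cells a and b.

    d_straight: cost of a horizontal/vertical step;
    d_diag: cost of a diagonal step (sqrt(2) for unit cells).
    """
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    # Move diagonally min(dx, dy) times, then straight for the remainder.
    return d_straight * (dx + dy) + (d_diag - 2 * d_straight) * min(dx, dy)
```

Defining d (and the step costs) explicitly like this would answer the question raised here.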
two different convolution
Explain the example below in more detail
User Interface
I would move this to be the last section, and have Environment and Map Representation (game board, tile system, grid structure, JSON saving/loading) go right after System Architecture.
History
Not sure this is helpful; instead, start with what the system is and what constraints guided its design (to flow from related work).
Summary and Motivation for This Tool
Can you add a summary table here? Something like:
| Feature | Visualizers | Game Engines | Robotics Tools | PathMaker |
| ------------- | ----------- | ------------ | -------------- | --------- |
| Visualization | ✓ | ✓ | ✓ | ✓ |
| Benchmarking | ✗ | Partial | ✗ | ✓ |
| Custom Maps | Limited | ✓ | Partial | ✓ |
| Ease of Use | ✓ | ✗ | ✗ | ✓ |
Benchmarking for Dynamic Environments
Strengthen this by specifying what PathMaker actually does for dynamic environments
PathMaker intends to build
what it does and not what it intends to do
Introduction
After re-reading the Introduction, I'm finding it fragmented and hard to follow. You start with definition, then quickly jump to multiple algorithms, introducing representations (grids, NavMesh, probabilistic maps) mid-flow. “Issues with pathfinding” comes late, even though it should frame the need for PathMaker. And details about PathMaker come after a long technical buildup. I suggest you organize this chapter as follows:
1 Introduction
1.1 Pathfinding: Context and Challenges
1.1.1 Common Pathfinding Approaches
1.1.2 Challenges in Pathfinding
1.2 PathMaker Overview and Contributions
1.3 Design Choices (Brief Overview)
1.4 Limitations of Existing Tools
1.5 Contributions Summary
1.6 Ethical Implications
**What is Pathfinding?** Keep: define pathfinding. Enhance: give 2–3 application examples (GPS, games, robotics). Briefly introduce the grid-based focus (move this up earlier).
Move out / reduce: detailed algorithm explanations (A*, Dijkstra) - shorten or defer emphasis. Long discussion of representations - keep minimal here.
A*, Dijkstra, Weighted Grids - combine & condense to provide minimal technical grounding
Keep very concise summaries (2–4 sentences each). Emphasize differences (optimality, cost handling, use cases). Avoid deep mechanics (no step-by-step descriptions).
Issues with Pathfinding - Move earlier
This is the main motivation. Briefly connect to representations (grids vs NavMesh vs probabilistic). Keep probabilistic maps/NavMesh as examples, not deep dives.
Project Overview - Move earlier
Refocus this section to explicitly answer:
What is PathMaker? What problem does it solve? Why is it different from existing tools?
Tighten: avoid repeating motivation language. Clearly list capabilities: map creation, algorithm execution, benchmarking, metrics.
Implementation Details - Keep (But de-emphasize in intro)
Keep short explanations of Rust + SDL2. Frame as: “lightweight, cross-platform, low-overhead”. Avoid deep technical detail (belongs in methods).
Current State of the Art - needs to connect to your project to show gaps. Structure it as: visualizers - lack benchmarking; game engines - too complex; APIs - too low-level; benchmark libraries - lack usability/integration.
End this section with a clear gap statement
Motivation - Connect gap -> need for your tool
Reduce repetition. Focus on:
- difficulty of evaluating algorithms in practice
- need for controlled experimentation
Goals of the Project - Convert into a clean list of features:
- Custom map creation
- Algorithm implementation support
- Automated benchmarking
- Visualization + analysis
Avoid repeating earlier explanations.
**Ethical Implications**
The goal of this project is to make a functional and robust tool that as said before allows for the creation of 2D grid-based maps.
The tool is already made
My program aims to be easy to use and give extensive data on the performance of these algorithms by not testing them on a singular map but by giving it multiple different circumstance’s tracking what those differences are and how it performs to help come up with how this an algorithm succeeds and its shortcomings and if it’s a good fit for the problem you’re trying to solve whether it’s finding routes on a real life map or trying to control AI in games to have affective path-finding these require different types algorithms with tradeoffs and my tool hopes to make this process easier.
This sentence is too long
My program
PathMaker
while also providing test maps and varying scenarios to run experiments on[16]. As well as maps designed to test algorithms on such as the Moving AI Repo
What about these?
is
?
algorithms
delete
tool
called PathMaker
This project
PathMaker
The figure above
Figure 2 ...
there
?
a
an
be being
?
the figure above of
Figure 1, where ...
Probabilistic Map Figure [19]
Label all figures. Figure 1: Probabilistic Map [19]
i
It
It does this by finding a path through another path-finding algorithm like Dijkstra or A* Usually using Dijkstra due to its nature of finding multiple paths to different nodes making it more ideal for generating a map
Rephrase - hard to parse
. W
, where
represent
represented
This then quickly becomes a traveling salesmen problem which for a large amount of instances can be computed it is computationally expensive and inefficient
Find a citation for this claim
A* Algorithm
Before jumping into the algorithms, give a little preamble about commonly used pathfinding algorithms. Maybe make a subsection called "Pathfinding Algorithms" and make each algorithm a subsubsection.
navigation meshes (NavMesh)
add a phrase to explain NavMesh
pchrimary
typo
If these libraries have vulnerabilities, all Pysealer users are exposed. If these libraries have vulnerabilities, all Pysealer users are exposed.
Delete one of them
Because MCP is a newly developed protocol, there are many potential vulnerabilities that have not yet been fully explored or addressed. While some tools have been created specifically to protect MCP systems, Pysealer offers a more general solution by focusing on the integrity of the underlying source code itself. This broader approach helps safeguard against a wide range of attacks, not just those unique to MCP. The importance of protecting MCP and similar systems is underscored by the significant financial impact of cybersecurity breaches. For example, the average cost of a data breach in the United States in 2024 was $9.36 million [2]. As organizations increasingly rely on MCP for critical AI applications, implementing robust security measures like Pysealer becomes essential to prevent costly incidents.
Too short to be its own subsection. Expand or merge with subsequent sections
At a high level, Pysealer introduces a novel approach to version control by enabling code to version control other code.
Is this actually true? In my understanding, Pysealer is not really doing version control in the Git sense
To understand why tools like Pysealer are important, it helps to know what version control is and why it matters in programming.
Let's tighten the paper by avoiding these type of sentences. It should read as a scientific paper, concise and to the point.
m (register() / unregister()), with persistent interaction state stored in Scene properties, user-level configuration stored in add-on preferences, and local conversation history persisted to disk with a temporary-directory fallback
Need to go deeper into these
What does not belong in the Introduction Practical modeling recipes, shading fixes, or operator sequences should not be included here. These belong in Methods or an Appendix, where Suzanne’s generated steps can be presented clearly. The Introduction is focused on background, motivation, problem definition, goals, scope, and ethics.
Delete
A very well-researched student project.
Add an abstract - 250 word summary of the project, including results.
Ethical Implcations
Typo
= TPTP+FN, = FPFP+TN where:
Fix how this appears
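These appear to be the true-positive rate and false-positive rate; assuming the standard definitions, a corrected rendering would be:

```latex
\mathrm{TPR} = \frac{TP}{TP + FN}, \qquad \mathrm{FPR} = \frac{FP}{FP + TN}
```

If the draft instead intends precision and recall, the second formula should be precision, $\frac{TP}{TP+FP}$; clarify which pair is meant.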
, ,
?
Computer Science
Computer and Information
Accuracy metrics
Maybe call it "Evaluation metrics"
F1 Score
Add the formula: F1 = 2 · (precision · recall) / (precision + recall).
I ensured a controlled sparsity level, making it comparable to real-world scenarios, weighted edges, where interactions had different levels of significance, mode attributes, incorporating demographic information, preferences, and past behaviors.
Give more information, how did you ensure this? Might help to give a snippet of the data to show features and 2-3 rows of values. Also, indicate how big the data is.
ChatGPT 4.0
Are you using ChatGPT or GPT-4?
Using the LLM to grade itself
State this earlier. The formal term for this is "LLM-as-a-Judge"
Linting and Testing
Need to add more information about the test cases.
generative artificial intelligence
Somewhere you need to explain what it is. What you are calling "artificial intelligence" here is not what the term formally denotes; it is what is known as AI in the popular press.
artificial intelligence
What is the difference between "artificial intelligence" and "Artificial intelligence"? You are using different capitalizations in different places.
[[29]][48]
Extra [ ]
Overview of Am.I.
Delete
My website has a good LCP scorce on the Homepage
what is a good score?
LCP generally considers tags in HTML like or
?
While the second data set contains most of the same information except for five categories–which are block, atomic weight, specific heat capacity, abundance in the earth’s crust and origin–there are some addition categories that it contains. The first of this additional layer is the atomic mass which as mentioned above has to do with the weight of a single atom. The number of neutrons, protons, and electrons. Protons and neutrons are the elements that make up the nucleus of an atom. The protons are positively charged while the electrons are negatively charged. The electrons orbit the nucleus. There are different methods of explaining the number of electrons. The most common way is that of the Bohr method was discovered in 1913 by a Danish scientist called Niels Bohr. However, though this method is simple and easy to understand it does a bad job of conveying the different orbital patterns of s, p, d and f. It also poorly conveys how electrons move. The Bohr models would lead one to assume that the electrons only move in a circle while the different orbitals levels actually move in a certain shaped. In addition, these electrons do not take about a fixed space rather because of their speed often only a percentage of them can be found in the orbital paths at one time. This set of data tells is the element is radioactive, natural, metal, nonmetal, and metalloid. If one of these qualifications is true of the element then that layer of the flashcard has “yes” otherwise it just have “None”. The type tells us whether the element is a metal, nonmetal, metalloids or a noble gas. The atomic radius has to do with the size of the atom from the radius to its outer most edge. This will measure the greatest distance at which electrons are most likely to be found, because as mentioned previously they do not have fixed positions. The atomic radius is measured in picometers (pm). 
The first ionization has to do with measuring the amount of energy that is required to move an electron from an atom is its gas phase. First ionization is measured in kilojoules per mole (kJ/mol). Isotopes where mentioned and explained above when talking about atomic mass. An atom often has different isotopes which have the same number of protons and is there for the same element and has the same atomic number but different isotopes have different number of neutrons. The discoverer and year tell more about the atoms discovery. The number of shells is the same as the energy levels of an atom. Each shell has a certain number of electrons that it can hold. For example, n=1 can only hold 2 electrons. n=2 can hold up to 8 electrons. The number of electrons that a shell can hold can be calculated using the formula 2n^2 where n is the shell number. Finally, the valence electrons has to do with the number of electrons that are found on the outer shell of an atom. All the elements in the first group, which are the columns, have only one valance electron. While group 2 has two valence electrons. Groups 3 through 12 are the metalloids so their valence electrons are not as stable. Group 13 through 18’s valence electrons can be counted by their single digit number. For example group 13 has 3 valence electrons, group 14 have four valence electrons and so on. https://gist.github.com/GoodmanSciences/c2dd862cd38f21b0ad36b8f96b4bf1ee#file-periodic-table-of-elements-csv
Hard to parse
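One concrete fix for this passage: the shell-capacity rule (2n²) it quotes can be stated compactly rather than by example. A minimal sketch of the rule (illustrative function name, not from the draft):

```python
def shell_capacity(n: int) -> int:
    """Maximum number of electrons in shell n, per the 2n^2 rule."""
    return 2 * n ** 2
```

Leading with the formula and then giving n = 1 and n = 2 as examples would read more clearly than the current ordering.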
This web tool when used with the web application program of Mandarin makes the assumption that the use already obtains a basic understanding of Chinese. This platform is not made to teaching the beginning levels of a language to anyone, though people may use the web tool for that purpose when creating their own flashcards. The Flashcard There are set up with four layers. There is the english meaning, the pinyin and tone, the simplified character and the traditional character. The English meaning contains one or two meanings that the word translates to. The pinyin and tone which contains the pronunciation and the the tone of the word. The simplified character shows how the character is written and the traditional category shows how the traditional character is written. When the simplified character and the traditional character are written the same the traditional character layer will simple state “Same as simplified character”. What this prints can easily be changed at a data prep level. For example, it could simply have “None” or display the character even if it is the same as the simplified character. The Data The data for the flashcards has come from Hanyu Shuiping Kaoshi, otherwise known as HSK. HSK is a test for non-native Mandarin speakers in order to test their Mandarin Chinese knowledge. The cards created have been divided based on HSK levels 1-4. The cards sets are organized in American alphabetical order based on the Pinyin Pronunciation. Although there are six levels of HSK only four have been prepared as sets. The data from these sets comes from a GitHub project called Inkstone. https://github.com/skishore/inkstone
This feels out of place.
[16], [10], [18], [39], [38]
order
introduced in the introduction
rephrase
[7], [10], [9], [18], [25], [38].
order
[28], [8]
order
There are many web tools present that aid people with learning through flashcards and specifically in language learning. And of course as more and more web tools become available online some have aided people in their efforts to learn and some that have actually negatively impacted people’s learning efforts [8], [22]. There are many web tools and techniques talked about for learning and teaching information in a class room along with self-motivated study sessions and web applications. The web tools being considered are largely web tools to study information generally and language and then language specifically [33]. These all fall under the category of self-motivated learning web tools. For language learning generally, flashcards applications will be considered [4]. The flashcards applications that will be discussed are Quizlet, Anki, and Anki App. The language learning platforms that will be considered are Duolingo, and three Mandarin specific tools: Skritter, Pleco, and HelloChinese.
Needs to be expanded to provide an overview of what your project is bringing to the state of art.
“it is the wearing smooth of neural pathways in the brain”
When quoting directly, include a page number
Humanism during the Renaissance period stood for the idea that humanity was a divine being capable of achieving remarkable things. However, humanism is close minded in the fact that the ideal form of humanity is the white male figure. Moreover, anything other than the ideal form is automatically considered to be less than human. This is where posthumanism responded and aims to reevaluate humanity through alternative lenses and frameworks of experience. Technology has often been a way to explore these ideas of posthumanism in a way that is open minded
A citation? Narrow the time period.
A valid response consists of a generation that does not include mor than one colon and ends with a form of punctuation
Need to explain any code you put in
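For instance, the validity rule stated here could be made concrete with a short check like the following (a sketch with illustrative names, not the author's actual code, assuming "a form of punctuation" means any standard punctuation character):

```python
import string

def is_valid_response(text: str) -> bool:
    """Valid = at most one colon, and ends with a punctuation character."""
    text = text.strip()
    if not text:
        return False
    at_most_one_colon = text.count(":") <= 1
    ends_with_punctuation = text[-1] in string.punctuation
    return at_most_one_colon and ends_with_punctuation
```

Explaining each condition in prose alongside the snippet would satisfy this comment.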
diagram
A little hard to see text in the diagram
Resulting in longer times to run and create the model.
Record time and output that result
I only chose a handful of them that are based on the readings through this paper that popped up consistently as models to use for predicting sports.
Cite
The coding language that I am using for this project is Python. It is the most versatile language because there are many libraries that are easily accessible and it is a very simple language.
Mention earlier
Python does need to be installed on to a computer in order to run files in a terminal.
Not true on Windows
This website has a main home page from which is connected to the different study sets. In side each study set the user then has access to flashcard sets. As you can see in figure NUMBER. The home page contains links to Mandarin flashcards, Arabic flashcards, a Chemistry Periodic table set, a Biology set and an anatomy set. Each of these then link to the specific study sets. Each of the study sets has their own page and as with the Chinese study set, there is also a tab to learn more about the culture. Though, it is from the home page that memorization tips and a form to contact the creater exist.
Overall, you need to connect the tools to each page when you discuss them below.
One common goal is Computer Science is to decrease the work of a person or a program. This means that in any file the goal of writing functions is to not write the same code over and over again. It is a similar idea here. As a website all the files are formatted the same to it would have been the same code in every file rather than one file with all the code that each individual file calls. It is a similar idea with the JS files, though JS is a little different because different HTML files connect to different JS files rather than all HTML files connecting to the same JS file like it is with CSS. The reason that it worked best to have JS in separate files is because of the amount of code in the JS files when they are needed. It also made the visual aspect of testing and using the differnt files more easily understandable.
Rewrite to make it more clear
<body></body>, <main></main>, <div></div>, <section></section>
Explain the different elements used
was written on VS Code
Don't need to specify the editor or mention what features of this editor were used outside of just writing programs
the the
Spellcheck
HTTP server rather than just clicking to open the files. I used the python command when working with the front end and the php command was utilized when working with PHP on the backend side.
Explain what HTTP server and PHP are
HTML, CSS, and JS. The data in the backend is stored in JSON files
Give a brief description of what these are and their purpose
under the profile [rebekahrudd?] under the project name “Mandarin Study Tools” or “Multilayered flashcards”
cite as link to a repo
The project implements the following graph-based similarity measures: Jaccard Similarity: Measures the overlap between neighbors of two nodes by dividing the intersection size by the union size. It’s effective for sparse networks with binary relationships. Adamic-Adar: A more nuanced similarity that gives higher weights to common neighbors that are less connected, assuming that rare connections carry more significant information. Resource Allocation: Models resource distribution on a network by assessing how “resources” (recommendations) flow from one node to another based on their shared neighbors. Preferential Attachment: This approach predicts new links by assuming that nodes with higher degrees are more likely to form new connections, reflecting a “rich get richer” phenomenon. Common Neighbors: Counts the number of shared neighbors between two nodes. The higher the number, the more likely the nodes are connected. MaxOverlap: A modified Jaccard similarity focusing on maximizing shared nodes between neighbors. Association Strength: Calculates the expected overlap between two nodes, considering the degree of each node and the size of the graph. Cosine Similarity: Computes the cosine of the angle between the degree vectors of two nodes, measuring similarity in a continuous, normalized way. To test the methods, I use similarity coefficients, statistics graphs such as cumulative gain chart, precision recall curve, time chart, ROC curve etc Regardless of the algorithm chosen, evaluation can be conducted using: - Precision-Recall Curves - Mean Reciprocal Rank (MRR) - Normalized Discounted Cumulative Gain (nDCG) - Cumulative Gain Charts - Time-based comparisons for evolving graphs
This does not make sense. Rewrite using cohesive sentences that connect to each other and portray the project goals.
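As one concrete anchor for the rewrite, each measure could be tied to its definition; for example, the first measure (Jaccard similarity of two nodes' neighbor sets) reduces to a few lines (a plain-Python sketch, not the project's code):

```python
def jaccard_similarity(neighbors_u: set, neighbors_v: set) -> float:
    """Jaccard similarity: |intersection| / |union| of two neighbor sets."""
    if not neighbors_u and not neighbors_v:
        return 0.0
    return len(neighbors_u & neighbors_v) / len(neighbors_u | neighbors_v)
```

Pairing each measure with its formula this way, then stating once which evaluation metrics apply to all of them, would make the section cohesive.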
Many of humanity’s fears of technology come from a fear of replacement. The fear that generative artificial intelligence will replace artists, writers, videographers, and more. Even more people are afraid of the physical replacement with robots. Media outlets at one point or another have reported robots being used to replace workers in manufacturing, the food industry, and even for social work.
Need sources for these statements, especially media outlets reports.
Many are concerned with how generative artificial intelligence is gaining a presence not only online but is making its way into the formal art world.
Need to a citation for this statement
This was the major discussion among most articles that have to do with learning Chinese as a second language. These articles referenced different technologies Chinese learners learners use, recommend, and were working on developing.
This needs to be moved closer to these articles. By this point, a reader does not know which articles you are referring to here.
A user will use this tool by logging on to the website and creating an account if they are first time users. If they have used the tool before they will just log in in order to see their progress. A user will see all the sets given to them and all the sets they have created. In order to start a study session a user will select the set that they wish to study. Once they select the set they will then see all the cards in that set loaded onto the screen with their mastery level of each card visible based on the color boarder surrounding the card. At the bottom of the screen will then be an option for the user to select “start” once clicking this button it will bring the user to a page asking what they would like to study in their session. They will then select which category they want on the front of their card and which they would like on the front of the card. They can select multiple options for either the front or the back. They will next select the “start” button and the study session will start. Users will also be able to create their own sets and add the different layers that they would like to focus on. If a card does’t have a definition or term in one layer of the flashcard it will just display “None” for that section. Though this tool will have premade sets for the Chinese languages HSK sets users can apply it to learn many different subjects.
Needs to be rewritten in the present tense.