- Nov 2024
-
pressbooks.openedmb.ca
-
ChatGPT while preparing for mock job interview activities, to produce technical questions outside their areas of expertise.
This is concerning given AI "gaslighting" (confidently stated misinformation), especially when instructors are operating outside their area of expertise and cannot verify the output.
-
Due to bias in AI tools’ training data, AI outputs need to be carefully reviewed to ensure that attempts to diversify content don’t perpetuate offensive stereotypes.
Give an example here, and acknowledge that the training data itself carries biases. For example, a lot of image-generation (Stable Diffusion) training data skews toward Asian women and anime art. This causes inherent bias: men frequently appear with missing body parts or distorted body proportions.
Try getting AI image generator to create "an overweight 40yo male wearing a pinstripe suit, sitting at a computer"...
When I did this, I consistently got pictures of unhappy white men, sitting in an office.
Something you have missed is the idea of how an LLM stores and represents data, and why that may cause bias. For example, Flux has difficulty generating anatomically correct images of people where SDXL and Pony models do not.
-
This could include asking an AI tool to provide examples from varied perspectives, such as different organizational roles, generations, or backgrounds,[9] or to provide counterpoints
This would be interesting to see in a side by side example.
-
and can be prompted to check or adapt a lesson plan according to a chosen framework or set of principles, such as Universal Design for Learning.
This is tangential and distracting. The UDL page is connected to the previous content. Show me what you mean or leave it out.
https://www.perplexity.ai/search/imagine-you-are-a-computer-net-ZFXb6tbCSBeaf44GWZSnfw
Prompt: "Recast the previous information on classful and classless IP subnetting in a UDL format." (Follow link for response)
-
custom instructions could save time.
Felt like a circular link. I expected to get information on grounding an AI or a wildcard prompt I could apply to multiple activities. Instead it looped back to "drafting lesson plans"
-
When provided with information about what has already been covered in the course, AI tools can remind instructors to integrate review of previously taught concepts into lesson plans and make recommendations about how to draw connections between different course topics.
This is something I would like to do. So, what are the steps?
1) How do I ground the AI?
2) How do I isolate the grounding material from the material actually delivered?
3) How would an AI recognize that critical content had been missed?
-
One RRC Polytech instructor found that it improved their scaffolding by ensuring that they didn’t overlook important foundational skills that students would need for completing a task
Interesting; now back it up. (This runs counter to my experience, and I want to know more.) Step me through what that might look like and why the AI might be better.
-
For more about reprompting, custom instructions, and adapting AI outputs, please see the Writing and Refining Prompts chapter.
Link is valid and useful, but it killed my flow in reading and understanding the chapter.
I think this paragraph would benefit from a walkthrough of the step-by-step process to refine prompts.
-
One RRC Polytech communication instructor, who used ChatGPT to generate a first-draft lesson plan about asking for clarification in a culinary workplace, found that the tool was useful for generating ideas and structure, but significant reworking was needed to make it usable for class.
This experience is valid - and a strong motivator for why I am reading this book. How can I use prompt engineering to get what I need quickly?
-
Drafting Lesson Plans
I have been using Perplexity to generate step-by-step instructions for students. It actually does this reasonably well; this lets me generate supporting documentation for labs/activities.
1) Brainstorm what I want the students to do.
2) Ask for step-by-step instructions.
3) Challenge the AI to explain each step in depth.
4) Copy-paste into my own document.
5) Validate the information is correct (assume gaslighting).
6) Add appropriate screen captures.
The key difference here is that I am trying to use AI to generate factually accurate information. The challenge is that much of the useful information is behind paywalls, or is presented by people who are not experts (YouTube). Finding easy-to-understand, credible sources is a challenge.
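The workflow above can be sketched in code. This is a hypothetical illustration only: `ask_llm()` is a stand-in for whatever chat API you use (Perplexity, ChatGPT, a local model), not a real API call.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real chat-API call; returns canned text here."""
    return f"[model response to: {prompt[:40]}...]"

def draft_lab_instructions(activity: str) -> list[str]:
    """Steps 1-3 of the workflow: brainstorm, request steps, challenge for depth."""
    steps = ask_llm(f"Give step-by-step student instructions for: {activity}")
    depth = ask_llm(f"Explain each of these steps in depth, citing sources:\n{steps}")
    # Steps 4-6 (paste into your own document, verify against credible
    # sources, add screen captures) remain manual -- the model's claims
    # must be validated before they reach students.
    return [steps, depth]

draft = draft_lab_instructions("configure IPv4 subnetting on a lab router")
print(len(draft))  # two drafts to merge and verify by hand
```

The point of the sketch is the shape of the loop: generation is cheap, but validation (step 5) is the part the model cannot do for you.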
//------------------------------------ Based on your example: here is the version I got....
Prompt: "Imagine you are a computer networking instructor teaching IP addressing. Prepare a lecture on IPV4 classed and classless addressing." https://www.perplexity.ai/search/imagine-you-are-a-computer-net-ZFXb6tbCSBeaf44GWZSnfw
Good morning, class. Today we'll be discussing IPv4 addressing, specifically focusing on classed and classless addressing schemes. This is a fundamental concept in networking that will help you understand how IP addresses are allocated and managed.
IPv4 Address Classes
IPv4 addresses are 32-bit numbers typically represented in dotted-decimal notation. Historically, these addresses were divided into five classes: A, B, C, D, and E.
Class A, B, and C Addresses
These classes were primarily used for network hosts:
| Class | First Octet Range | Default Subnet Mask | Networks   | Hosts per Network |
|-------|-------------------|---------------------|------------|-------------------|
| A     | 1–126             | 255.0.0.0 (/8)      | 126        | 16,777,214        |
| B     | 128–191           | 255.255.0.0 (/16)   | 16,384     | 65,534            |
| C     | 192–223           | 255.255.255.0 (/24) | 2,097,150  | 254               |
Class A addresses were designed for extremely large networks, using the first octet for the network ID and the remaining three for host addresses[1][3].
Class B addresses were intended for medium to large networks, using the first two octets for the network ID and the last two for host addresses[1][3].
Class C addresses were used for small local area networks (LANs), using the first three octets for the network ID and only the last octet for host addresses[1][3].
Class D and E Addresses
Class D (224.0.0.0 to 239.255.255.255) is reserved for multicast groups[1][3].
Class E (240.0.0.0 to 255.255.255.255) is reserved for experimental purposes[1][3].
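As a quick self-check of the first-octet ranges above, here is a small classifier sketch (not part of the Perplexity output; the loopback special case explains why the Class A range in the table stops at 126):

```python
def ipv4_class(address: str) -> str:
    """Return the historical class (A-E) of a dotted-decimal IPv4 address,
    based on the first-octet ranges in the table above."""
    first = int(address.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if first == 127:
        return "loopback"  # 127.x.x.x is reserved, hence the gap in the ranges
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"
    if 240 <= first <= 255:
        return "E"
    raise ValueError(f"invalid first octet: {first}")

print(ipv4_class("10.0.0.1"))     # A
print(ipv4_class("192.168.1.1"))  # C
print(ipv4_class("224.0.0.5"))    # D
```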
Limitations of Classful Addressing
While classful addressing was initially useful, it had several drawbacks:
- Inefficient allocation: Many organizations were assigned more addresses than needed, leading to address waste[2].
- Limited flexibility: The fixed subnet masks didn't allow for efficient subnetting[2].
- Rapid IPv4 exhaustion: The rigid class structure accelerated the depletion of available IPv4 addresses[2].
Classless Addressing
To address these limitations, classless addressing was introduced in 1993 through Classless Inter-Domain Routing (CIDR)[2].
Key Features of Classless Addressing:
- Variable Length Subnet Masking (VLSM): This allows network administrators to use custom subnet masks, enabling more efficient use of IP address space[2].
- More efficient routing: CIDR allows for route aggregation, reducing the size of routing tables[2].
- Flexible network sizes: Networks can be sized appropriately for specific needs, rather than being constrained to predefined classes[2].
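The VLSM point above can be demonstrated with Python's standard-library `ipaddress` module (an illustrative sketch, not part of the original response): carving one /24 into four equal /26 subnets.

```python
import ipaddress

# VLSM in practice: split a /24 into four /26 subnets.
net = ipaddress.ip_network("192.168.1.0/24")
subnets = list(net.subnets(prefixlen_diff=2))  # /24 -> four /26s
for s in subnets:
    # .num_addresses counts every address; subtract network + broadcast
    print(s, "usable hosts:", s.num_addresses - 2)
```

Unequal splits (e.g. one /25 plus two /26s) are equally possible, which is exactly the flexibility classful addressing lacked.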
CIDR Notation
CIDR uses a forward slash notation to specify the number of network bits. For example:
- 192.168.1.0/24 indicates a network with 24 network bits and 8 host bits.
- 10.0.0.0/8 represents a large network with 8 network bits and 24 host bits.
Practical Implications
- Address conservation: Classless addressing has significantly extended the lifespan of IPv4 by allowing more efficient allocation[2].
- Improved network design: Network architects can create subnets of various sizes to match specific requirements[2].
- Enhanced routing efficiency: CIDR enables route summarization, reducing the load on internet routers[2].
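The route-summarization point above can be made concrete with the standard library (an illustrative sketch, not from the original response): four contiguous /24 routes collapse into a single /22 entry.

```python
import ipaddress

# Route aggregation (supernetting): four contiguous /24s become one /22,
# shrinking a routing table from four entries to one.
routes = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]
summary = list(ipaddress.collapse_addresses(routes))
print(summary)  # [IPv4Network('10.1.0.0/22')]
```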
Conclusion
While classful addressing played a crucial role in the early days of the internet, classless addressing has become the standard due to its flexibility and efficiency. Understanding both systems is essential for network professionals, as you may encounter legacy systems using classful addressing, while most modern networks employ classless addressing techniques.
In our next lecture, we'll delve deeper into subnetting and VLSM calculations. Any questions before we wrap up?
Citations:
[1] https://www.techtarget.com/whatis/definition/IPv4-address-class
[2] https://www.auvik.com/franklyit/blog/classful-classless-addressing/
[3] https://www.meridianoutpost.com/resources/articles/IP-classes.php
[4] https://www.uobabylon.edu.iq/eprints/publication_3_2673_1450.pdf
[5] https://www.hostwinds.com/blog/ipv4-classes-what-are-they-and-how-are-they-used
[6] https://www.tutorialspoint.com/ipv4/ipv4_address_classes.htm
[7] https://www.geeksforgeeks.org/introduction-of-classful-ip-addressing/
[8] https://en.wikipedia.org/wiki/Classful_network
//-------------------------- Observations/practice: While reasonably correct, it lacks the depth and context required for teaching. It is a snippet. I need to merge several prompts to get reasonable coverage of the material (see link). Also, the answer could easily be wrong, so I need a way to verify the accuracy of AI-generated information. (Note: since much of it comes from YouTube, the source material may be bad.)
-
-
pressbooks.openedmb.ca
-
Course Planning
This is the section I am most interested in - I would like to see examples of how to make my teaching prep easier.
Feels light. For a book on AI in education, I would expect this to be the heart.
What I haven't seen so far is a guide on how to ground AI in your source material.
-
-
pressbooks.openedmb.ca
-
while Adobe Firefly converts a
Google "Adobe dumpster fire": people are concerned about how Adobe is using their professional work without consent.
-
Image-Producing AI Tools
I am also concerned about the lack of free resources and locally generated content.
When using Leonardo.ai and others, the interface was such that most of your art became public (free version) and you had no copyright over it; that was the best case. Switching to ComfyUI, I've been able to generate art locally, do unlimited iterations, and develop skills that I could not with web models.
Adobe comes with a huge price tag, privacy concerns, and the risk of being locked out of your work if you stop paying.
/---------------------------------------------------------- For comparison, look up: Krita + Acly vs. Adobe or Midjourney. https://www.perplexity.ai/search/how-does-krita-acly-compare-to-14E5y_ZSTEScfRwNhqElBA
Krita with the Acly AI diffusion plugin offers some compelling AI image generation capabilities that can be compared to Adobe's Generative Fill and Midjourney, though there are some key differences:
Strengths of Krita + Acly
Automatic Mode Switching
Krita's implementation automatically switches between simple generation and inpainting based on whether you've made a selection. This streamlines the workflow compared to manually toggling between modes[1].
Layer Support
The plugin adds layer functionality similar to Photoshop, allowing you to stack generations for backgrounds, subjects, etc. This provides more flexibility in composing complex images[1].
Model Compatibility
It's compatible with popular AI models like ControlNet and Lora, giving users access to a wide range of generation options[1][2].
Open Source and Free
Unlike Adobe's offerings, Krita and the Acly plugin are completely free and open source[4][5].
Limitations
GPU Requirements
For optimal performance, especially with live generation, a powerful GPU (RTX 4070 or better) is recommended[2].
Model Size
The default models used are typically smaller (around 6GB) compared to Adobe's large proprietary models, which may impact generation quality in some cases[4].
Work in Progress
Some features, like inpainting with transparent backgrounds, are still being refined[1].
Comparison to Adobe and Midjourney
- Ease of Use: Adobe's Generative Fill is known for its user-friendly interface within Photoshop. Krita's implementation is getting close, but may require more tweaking.
- Output Quality: Adobe and Midjourney likely have an edge in overall output quality due to their larger, more extensively trained models.
- Integration: Krita offers better integration with traditional digital painting workflows compared to Midjourney, which is primarily focused on text-to-image generation.
- Cost: Krita + Acly is free, while Adobe and Midjourney require subscriptions.
- Customization: Krita allows for more customization and use of open-source models, which can be an advantage for advanced users.
While Krita + Acly may not yet match the polish of Adobe's AI tools or the raw generation power of Midjourney, it offers a compelling free alternative that's rapidly improving. For many users, especially those already familiar with Krita, it could be an excellent option for integrating AI generation into their workflow.
Citations:
[1] https://civitai.com/articles/4008/is-krita-the-better-tool-for-you
[2] https://www.reddit.com/r/StableDiffusion/comments/17ytw1g/krita_is_by_far_the_best_software_for_ai_image/
[3] https://krita-artists.org/t/is-there-a-object-selection-tool-like-in-photoshop/56825
[4] https://www.youtube.com/watch?v=Qv9G5QsvlxQ
[5] https://www.youtube.com/watch?v=suwwW9eE5qI
[6] https://news.ycombinator.com/item?id=40672281
[7] https://forums.macrumors.com/threads/humans-have-nothing-to-fear-from-adobe.2419287/
[8] https://news.ycombinator.com/item?id=38342670
-
-
Video-Producing AI Tools
This feels dated. AI video tools are changing rapidly; I would recommend including the date when this content was current.
Today I had a student submit a 5-minute video done completely with AI:
ChatGPT for the script, ElevenLabs for text-to-speech, Runway for the video animations.
-
exploration, access a searchable database of over 12,000 AI tools.
Link works as of Nov 8, 2024.
-
AI Tools Task
Are these links vetted? Who tested them, and how were they evaluated?
After the second "register for our service" link, I stopped trying. It felt too much like bait and switch.
I've experienced a lot of gaslighting from AI, and these webpages don't build my confidence.
It seems like everyone is trying to dig into your pockets with half-developed projects.
-
-
pressbooks.openedmb.ca
-
To view the full sample user interaction with ChatGPT, please select the image below to expand it.
I jumped here because I was getting frustrated with lots of directions to go explore, when what I wanted was tips to use the tools in a productive way.
This felt valuable. I could see how the prompt was being turned into something that would save me time and effort.
I would have
-