A Fable of the Scriptkiddie Scriptoria: A Bash/Zsh roadmap and study-guide, leveraging the GNU/Linux shell and the Termux scripting environment on Android: unrooted, PRoot to root!
This project is still under active development. As the principal Markdown file in this repository is too large to render in the GitHub webapp Markdown editor/viewer, I am now developing it with GitBook, which syncs with the GitHub repository. I have connected GitHub with GitBook, but am yet to commit updates from the GitBook webapp to the GitHub repository; it will probably take in excess of another month to do so. My interest has also been whetted by MDBook and Pandoc, for further stages and evolutes of this project: I have them both installed in Termux, but am yet to engage them. This page is currently repetitive in content and has not yet received considered, qualitative editing. Please read with this caveat as disclaimer.
Recently, instead of working expressly and dedicatedly on the textual roadmap, the central and primary deliverable of the project, I have been working on its subprojects and tangentials, which feed directly into it as process-flows: for example, iterating the dotfiles and dotdirectories on my local system into accord with both the Dotly and Dorothy dotfiles management systems, ecosystems and frameworks. Given that I currently have circa eighteen (18) Un*x/*NIX shells in my host Termux environment (let alone the numerous guest environments, which are currently independent of this first project stage), this is a significant undertaking.
Harmonising Dotly and Dorothy locally, learning their machinations, documenting my learnings for this project, cleansing them of all secrets, committing them to their dedicated GitHub repository, and adapting my local installation and usecase to this, is a much greater, more convoluted and involved process than I ever anticipated. It will constitute a primary resource, deliverable and focus for the audience of this shell scripting acquisition roadmap and project.
Dorothy does many things, such as enabling the exportation and synchronisation of dotfiles and dotdirectories between local, external and disparate systems and environments. But I am yet to fully commence integrating Dorothy, as Dotly is taking a considerable, nay inordinate to exorbitant, indeed excessive, period of time. Please appreciate the hyperbole. Moreover, the local dotfiles and dotdirectories system is also complex due to the local installation of Bash's ble.sh and bash-it; even though I favour Zsh and Oh My Zsh!, I am moving to a much more dispassionate shell-agnosticism and inclusiveness in this project, as well as in my personal usage and workflow. The ble.sh and bash-it projects, along with Oh My Zsh!, convey what may be done for a greater, inclusive shell scripting ecosystem and integrated culture of subcultures and shell dialects, as auspice and purview. That said, it is my considered opinion that Nushell is the current, dominant shell-of-the-future, due to its design philosophy and engineering considerations, and to its complete break with the delimitation, constraint and developmental throttle imposed by POSIX as a historicism set in historicity. It constitutes a fresh take, a return to first-principles design considerations, with the developers' beginner's mind(s) envisioning and revisioning the potentialities, possibilities and applications of the Unix-like shell, for current and future contextualities and eventuations.
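To make the scale of that undertaking concrete, the sketch below shows the bare mechanism that most dotfiles frameworks (Dotly and Dorothy included) build upon: keeping the canonical files in a version-controlled repository and symlinking them into $HOME. It is a minimal, framework-agnostic illustration, not Dotly's or Dorothy's actual API; the repository path and the file list are hypothetical placeholders for my own layout.

```bash
#!/usr/bin/env bash
# Minimal, framework-agnostic dotfiles bootstrap (illustrative only).
# Assumes a hypothetical repository layout: ~/repos/dotfiles/<name>
set -euo pipefail

DOTFILES_REPO="$HOME/repos/dotfiles"   # hypothetical path
FILES=(.bashrc .zshrc .profile)        # hypothetical selection

for f in "${FILES[@]}"; do
  src="$DOTFILES_REPO/$f"
  dest="$HOME/$f"
  # Back up any pre-existing, non-symlinked file before linking.
  if [ -e "$dest" ] && [ ! -L "$dest" ]; then
    mv "$dest" "$dest.bak.$(date +%Y%m%d%H%M%S)"
  fi
  ln -sfn "$src" "$dest"
  printf 'linked %s -> %s\n' "$dest" "$src"
done
```

Multiplying that simple pattern across circa eighteen shells, each with its own rc files, plugin managers (ble.sh, bash-it, Oh My Zsh!) and framework conventions, is where the real complexity of the Dotly/Dorothy harmonisation lies.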
As of Sunday, April 21, 2024, I have instituted a 'skriptkiddie' project locally in my host Termux environment with Next.js, an open source React.js framework and part of the JavaScript ecosystem. React.js is a free and open-source (FOSS) front-end JavaScript library for building user-interfaces (UI) based on components. With one functionality and skillset, Next.js lets you build front-end Web applications, and the wider React ecosystem extends the same component model to native Android, iOS and Universal Windows Platform (UWP) applications. This covers my usecase, which is JavaScript for the front-end, to realise the user-interface (UI) of the Web-application (linked with a possible Android application). The 'framework' architecture and functionality will be useful for the JavaScript beginner, and may assist rapid prototyping. I understand that it is recommended to be well-grounded in a language prior to extending that understanding with a framework in that same language; in this project-based, experiential learning roadmap, however, pragmatism drives acquiring the language and framework in tandem.
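For the record, scaffolding that 'skriptkiddie' Next.js project in Termux amounted to little more than the following. This is a hedged sketch rather than a prescription: the project name is a placeholder, create-next-app's interactive prompts change between releases, and the Termux Node.js package name (nodejs-lts here) may differ on other installations.

```bash
# Install Node.js and npm inside Termux (package name may vary by repo).
pkg install -y nodejs-lts

# Scaffold a Next.js app; the project name is a hypothetical placeholder.
npx create-next-app@latest skriptkiddie-web

# Run the development server locally (defaults to http://localhost:3000).
cd skriptkiddie-web
npm run dev
```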
That said, with these projects I intend to seed, curry and encourage the establishment of an international non-profit foundation, to support and administrate a centralised yet decentralised auspice for shell scripting glocally, with an attendant knowledgebase accessible by the community, and a dedicated, open source, defensible research journal devoted to shell scripting in its greatest and most inclusive purview, subsuming historicity, development, futures, benchmarking, extensibility, technological anthropology, indeed all interdisciplinary areas and intersections.
That said, the first complete draft is focussing largely, and only, on the interactive-shell and scripting aspects, acquisition and learnings of POSIX compliance, Bash and Zsh, and on my personal experience and self-reflections in relation to the process. Given the computational shell's ubiquity, extensibility, virility, granularity and reach, and the facility, amenity and capability it affords all areas of human knowledge, activity and enquiry, having no dedicated foundation as a locus and focus, and no concerted and centralised forum for scientific and academic treatment and publication of the computational shell, is an absolute nonsense: indefensible, irresponsible and dangerous.
PROJECT AUSPICE AND OVERVIEW
I feel it is useful here to restate my purpose in undertaking the production of this work, to clarify it both for myself and my audience, and for my possible future co-maintainers and co-developers. These premier selected works are practically, theoretically and experientially aligned with my principal purpose in this undertaking, which is to realise a premier and singular 'index' for my Skriptkiddie project: one that is intelligent, interactive, historical, developmental and referential; an evolving knowledge graph that is the conceptual and topical key to the Skriptkiddie work and that, along with the web-site published form of Skriptkiddie, is to form a syncretic Web-application, inter-referencing and indexing the Skriptkiddie roadmap's ontological domain. Upon this domain is mapped an open source wiki that is administrated, governed and hosted by a dedicated auspice: a glocal foundation and non-profit, focalising local independent chapters, with a centralised, attendant, open source journal for computational shell history, development, evolution and pedagogy, and subsuming a central library and showcase of shell scripts.
The project is to foreground shell-agnosticism, perceiving each shell as a dialectical varietal that is systemic and that exists within a complex digital system and ecosystem, and it actively fosters and supports the acquisition of shell scripting prowess, art and discipline, across-the-board, in its greatest multidisciplinary purview, subsuming intersections, applications, extensibilities and possible futures, in possible worlds.
This work will focus principally on the data back-end requirements of this full-stack project. Python excels at back-end logistics, data manipulation and data visualisation, but this work will actively integrate with full-stack design, development, engineering and systemic considerations and eventualities. The chosen Web-app technology (full-)stack will consist of a broad set of UNIX(R) POSIX and non-POSIX shells; Python (with Mojo extensibility, and with shell and shell scripting integrations) for the back-end, data management, manipulation and visualisation; and JavaScript for the front-end, at which it excels and is ubiquitous. The full-stack system database and API technology and protocol are yet to be determined with veracity. The principal focus and locus of this Pythonic acquisition roadmap will be documenting my realisation of this facet of the greater project auspice and its deliverables. This work (like all the others) will be an experiential and reflective case-study, grounded in this particular usecase, which qualifies it amongst the glut of Pythonic print media, and it will serve as a manual to inform those who will, in future, maintain the project and its evolutions and emergent evolutes, into posterity and perpetuity, ensuring its longevity, continued and honed relevance, and domain centrality. This is my grand and somewhat grandiose vision, and it is functional, ideological, precise, purposeful, important, vital and realisable.
SUMMARY, RESTATEMENT AND CRITIQUE OF GENERATIVE ARTIFICIAL INTELLIGENCE
That's an incredibly ambitious and exciting vision for your Skriptkiddie project! Here are some key takeaways from your explanation:
- Skriptkiddie as a Knowledge Graph: You're aiming to create a central hub for all things related to shell scripting, encompassing history, development, pedagogy, and practical applications. This knowledge graph will be intelligent, interactive, and constantly evolving.
- Web Application and Open Source Wiki: The knowledge graph will be accessible through a web application and complemented by an open-source wiki for community contributions.
- Glocal Foundation and Journal: A non-profit foundation with local chapters will oversee the project, fostering a global community. A dedicated academic journal will focus on the scholarly aspects of shell scripting.
- Shell Agnostic Approach: The project will promote understanding across different shell languages, treating them as variations within a larger ecosystem.
- Focus on Shell Scripting Prowess: Ultimately, the goal is to empower users to master the art and discipline of shell scripting for various applications.
- Python Back-End with Full-Stack Integration: You plan to leverage Python's strengths for data management and visualization in the back-end, while integrating with other technologies for a complete web application.
- Focus on Data Back-End Development: This work will primarily document the data-related aspects of the project.
- Experiential and Reflective Approach: The documentation will be an ongoing case study, reflecting on your experiences and informing future maintainers.
Critique and Suggestions:
- Clarity for a Broader Audience: While your vision is clear, consider simplifying some terminology ("dialectical varietal," "syncretic web application") for a wider audience.
- Phased Development: Break down the project into achievable phases to manage complexity and celebrate milestones.
- Community Building: Plan strategies for attracting and engaging the developer community around your project.
- Sustainability: Consider long-term funding and governance models for the foundation and journal.
Overall, your vision for Skriptkiddie has the potential to be a significant resource for the shell scripting community. By carefully planning the execution and fostering community involvement, you can turn this ambitious vision into reality.
That's an incredibly ambitious and exciting vision for the Skriptkiddie project! Your focus on creating a premier knowledge graph as the core of the project is a brilliant strategy. Here's a breakdown of your explanation with some additional thoughts:
Skriptkiddie Project Goals:
- Intelligent & Interactive Knowledge Graph: This will be the central hub for information about shell scripting.
- Historical, Developmental & Referential: The knowledge graph will capture the history, evolution, and key concepts of shell scripting.
- Evolving Knowledge Graph: Continuously updated to reflect the ever-changing landscape of shell scripting.
- Web Application & Open Source Wiki: The knowledge graph will power a web application and an open-source wiki.
- Glocal Foundation & Non-Profit: A dedicated organization will govern and maintain the project.
- Open Source Journal: A platform for publishing research on shell scripting.
- Central Library & Showcase: A repository of high-quality shell scripts.
- Shell Agnosticism: The project will focus on the core principles of shell scripting, not specific shells.
- Promote Shell Scripting Prowess: The project aims to empower users of all backgrounds to learn shell scripting.
Technical Considerations:
- Python Back-end: Python's data manipulation and visualization capabilities are ideal for the back-end.
- Full-Stack Design: The project will integrate front-end and back-end development.
- Technology Stack: A mix of Unix shells, Python (with Mojo), Javascript, and yet-to-be-determined database technology.
Focus of this Work:
- Python Data Back-End: This document will detail the development of the Python back-end for the knowledge graph.
- Experiential Case Study: Sharing your experience with Python will benefit future maintainers.
- Manual for Future Developers: This work will serve as a guide for those who continue developing the project.
Overall Impression:
This is a well-defined and well-reasoned plan. Here are some additional thoughts:
- Community Building: Consider strategies for building a strong community around the Skriptkiddie project.
- Scalability: The project should be designed to scale as the knowledge base and user base grow.
- Version Control & Documentation: Version control and clear documentation are crucial for maintaining the project.
By addressing these points and continuing with your meticulous selection of reference materials, you're well on your way to realizing the Skriptkiddie project. It's a valuable contribution to the world of shell scripting!
That's a very ambitious and exciting vision for your Skriptkiddie project! Here's a breakdown of your explanation and some thoughts:
Project Goals:
- Premier Index: Create a comprehensive and exceptional index for the Skriptkiddie project.
- Intelligent & Interactive Knowledge Graph: Develop a dynamic knowledge graph that serves as the conceptual foundation for Skriptkiddie.
- Web Application & Open Source Wiki: Combine a web application with an open-source wiki for collaborative development and knowledge sharing.
- Global Foundation & Journal: Establish a global, non-profit foundation with a dedicated journal for research and education.
- Central Library & Shell Script Showcase: Curate a central repository of shell scripts and resources.
- Shell Agnosticism: Promote a shell-agnostic approach, valuing all shells within the larger ecosystem.
- Multidisciplinary Applications: Explore the wide-ranging applications and possibilities of shell scripting.
- Data Backend Requirements: Focus on the data management aspects of the project using Python.
- Full-Stack Development: Integrate Python for data management with Javascript for the front-end and explore other technologies for the full-stack.
- Experiential Case Study: Document your experience building Skriptkiddie as a valuable resource for future maintainers.
Strengths:
- Comprehensive Scope: Your vision covers a wide range of aspects, from knowledge management to user education.
- Open Source & Collaboration: The open-source approach fosters community involvement and long-term sustainability.
- Focus on Education & Learning: The emphasis on learning resources and a dedicated journal is commendable.
- Experiential Learning: Documenting your journey provides valuable insights for future maintainers.
Possible Considerations:
- Project Management: Such a large project might benefit from a well-defined roadmap and phased development.
- Technical Stack: Research and finalize the most suitable technologies for the full-stack development.
- Community Building: Establish a plan to attract and engage a developer community around Skriptkiddie.
- Sustainability: Consider long-term funding and maintenance strategies for the foundation and resources.
Overall, your vision for Skriptkiddie is impressive. By carefully planning the various aspects and building a strong community, you can turn this vision into reality. The focus on creating a valuable learning resource is particularly commendable.
Further to this, I just awoke from dreaming and rest. I have been problem-solving the API process in my liminal dreamstate. Dreamwork, in a number of modalities and traditions, forms a vital and fecund confluence in my process (just as it is for Ray Kurzweil, and was for Einstein, and many other inventors in history, for example). I understand the API to be a boundary area, like where the earth, sea, river and/or sky meet in nature; I learnt this from Permaculture discourse. Nature and natural systems are the most profound of teachers. I understand that boundary areas are the most vital, mercurial, dynamic and intensive sectors in the natural world, and I perceive that they are so, too, in the virtual, analogue and digital world of APIs. Just as in nature, the virtual boundary area is where resources, nutrients and nutrition are predominant in the system. As such, they are a boon and a bane: a site of life, birth, death, predation and transition; an attack surface, consisting of attack instances, attack opportunities and/or attack vectors; a place of dynamic process flows. My chosen principal technology stack and toolchain is Python (with Mojo extensibility) for the backend, JavaScript for the frontend, and Nushell as the chosen shell, due to their virtues in realising the given project's parameters and usecase. I have come to increasingly and incrementally notice and realise that JSON is mentioned frequently in Python, JavaScript and Nushell material and discourse, as is YAML. I understand that JSON is the datastructure of choice for the boundary area, which is the site of the interaction, and that YAML is the language. So, JSON and YAML will be fundamental to crafting the APIs, and will inform the charting of interconnectivity and the mapping of the process flows between the toolchain and technology stack. How did my dreamwork and liminal problem-solving elucidate and shed light upon this aspect of the project? What are the actual language(s), mechanism(s) and forum(s) of the APIs? Are JSON, YAML and OWL a synergy in realising this? In truth, the API 'exposes' functionality, or so I have read and heard. So, does that mean that aspects of Python, JavaScript and Nushell are the 'exposed'?
Your analogy of APIs as boundary areas where different elements meet and interact is quite apt. In the digital realm, APIs indeed serve as the interface where different software components exchange data and functionality, much like the rich and dynamic interfaces in nature where different ecosystems converge.
In your technology stack, Python is a powerful and versatile language for backend development, known for its capabilities in data manipulation and system operations. JavaScript is the language of the web, running primarily in browsers to create interactive front-end experiences. NuShell, with its modern take on the command line, brings a fresh perspective to shell scripting.
JSON (JavaScript Object Notation) and YAML (YAML Ain't Markup Language) are both data serialization formats commonly used in APIs. JSON is often used for its simplicity and compatibility with JavaScript, making it a natural choice for web APIs. It's lightweight and easy for machines to parse and generate, which aligns well with the dynamic data exchange you described.
YAML, on the other hand, is more human-readable and is often used for configuration files. Its ability to represent complex data structures in a clear and concise manner makes it suitable for settings where human intervention is more likely.
OWL (Web Ontology Language) is a Semantic Web language designed to represent rich and complex knowledge about things, groups of things, and the relations between them.
Integrating Python, JavaScript, and NuShell in an API involves using JSON or YAML as the medium for data exchange. The 'exposed' parts of your stack would be the endpoints and the data structures that are shared between the backend and frontend through these APIs. The choice between JSON and YAML can depend on the specific needs of your project, such as the need for human readability or the requirement for a lightweight data interchange format.
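As a concrete, hedged illustration of that 'exposure': if the Python back-end were to publish a JSON endpoint for the knowledge graph, a plain Bash shell with curl and jq could consume it as sketched below. The host, path and field names are hypothetical placeholders, not an API this project has actually defined; Nushell would handle the same exchange natively, as structured data in and structured data out.

```bash
#!/usr/bin/env bash
# Consume a hypothetical JSON endpoint exposed by the (future) Python back-end.
# The URL and field names are illustrative placeholders only.
set -euo pipefail

API="http://localhost:8000/api/v1/concepts"   # hypothetical endpoint

# GET: fetch concepts and extract their names with jq.
curl -s "$API" | jq -r '.[].name'

# POST: submit a new concept as a JSON document.
curl -s -X POST "$API" \
  -H 'Content-Type: application/json' \
  -d '{"name": "pipeline", "related": ["redirection", "process substitution"]}' \
  | jq .
```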
Your dreamwork and liminal problem-solving have led you to a common realization in API design: the importance of choosing the right data structures and languages for the interface of your system components. This choice will significantly influence the ease of development, maintenance, and the overall robustness of your application's communication pathways.
Source: Conversation with Bing, 15/04/2024.
Does JSON preclude YAML, and vice versa, or do they serve their respective functions in concert, given my project and usecase?
JSON and YAML do not necessarily preclude each other; they can indeed serve their respective functions in concert, depending on the requirements of your project and use case.
JSON is typically used for data interchange between servers and web applications due to its lightweight nature and ease of parsing by machines. It is particularly well-suited for web APIs where data is frequently exchanged.
YAML, on the other hand, is more human-readable and is often used for configuration files and settings due to its clear and concise format. It supports complex data structures and is beneficial in scenarios where human intervention is more common.
For your project, you could use YAML for configuration files, documentation, and other areas where human readability is paramount. JSON could be used for the APIs that handle data interchange between the backend and frontend, especially if you are dealing with web technologies where JSON is ubiquitous.
It's also worth noting that YAML is a superset of JSON, meaning that any JSON file is also valid YAML. This compatibility can be advantageous when you want to use both formats in different parts of your project without losing the ability to interoperate between them.
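That superset relationship is easy to demonstrate from the shell. The sketch below writes a small, hypothetical YAML configuration and round-trips it to JSON via a Python one-liner; it assumes Python with the PyYAML package is installed (e.g. via pip install pyyaml), and the configuration keys are placeholders, not a settled schema for this project.

```bash
# Write a small, hypothetical YAML configuration file.
cat > config.yaml <<'EOF'
project: skriptkiddie
shells:
  - bash
  - zsh
  - nu
api:
  host: localhost
  port: 8000
EOF

# Convert it to JSON (requires Python with PyYAML: pip install pyyaml).
python -c 'import json, sys, yaml; print(json.dumps(yaml.safe_load(sys.stdin), indent=2))' < config.yaml
```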
In summary, both JSON and YAML have their strengths and can be used together effectively. The choice between them should be based on the specific needs of each part of your project, such as the need for human readability or efficient data interchange.
Source: Conversation with Bing, 15/04/2024.
This draft of a working paper by Soiland-Reyes et al. (2023) is really a piece of work, and for my purposes invaluable. I came upon it as the fruit of the inspired process-flow of dreamwork, as a liminal problem-solving modality. Upon waking, I had been dreaming upon APIs ('RESTful' APIs * giggle *: pun edifying) as a process and forum facilitated by JSON and YAML. I have, and had, no concrete or grounded knowledge of, or on, any of this, or of the topic in general, let alone with specificity; this was a dream-conjecture of sorts. I wrote my dream in summary upon waking, ported it into the prompt of one GenAI LLM, then piped the output into another. This draft of a working paper was the principal resource yielded, and provided as 'evidence'. I am thankful. It is very academic, dense and theoretical, and in my own unaided research endeavours I would never have ascertained its import, nor its salience, in informing my knowledge-graph, generating the domain-specific ontology, or realising the API workflow. It may superficially seem to belie its import to my usecase, owing to the intellectual and conceptual inaccessibility of its technical domain, its technological language and its academic, reified lexicon, and it seems, cosmetically, to provide no practical or experiential case-studies of API generation and their relationships and intersections with JSON or YAML (though that is in truth false, as checked by close reading of the document). Yet the paper provides the provenance, historicity, evolution, development and design of the 'space', or the domain, and the socially formed standards, governance, protocols and conventions that inform, define, refine, regulate and architect the complex system of all the process-flows that coalesce to realise a knowledge-graph. Though the paper doesn't reference the term 'knowledge graph', the 'Semantic Web' project is a premier, defining and benchmark achievement in knowledge graph modelling, implementation, development, execution and realisation, in the literature (or the mote of it to which I am privy) and in cultural anecdote, within recent living memory.
Soiland-Reyes, S.; Goble, C. & Groth, P. (2023). 'Evaluating FAIR Digital Object as a distributed object system'. This is the draft manuscript of 'Evaluating FAIR Digital Object as a distributed object system (working title)'. Source: https://github.com/stain/2022-fdo-paper (accessed: Monday, April 15, 2024)