The survey results then turned to the relationship between perceived CL effectiveness in productivity and the skill of the researcher-developer. A summary point: people with more experience and skill in software development report a smaller “productivity boost” from using a CL, sometimes even a decrease.
Turning to the scientific literature produced through CL collaboration: an interesting resource is the Retraction Watch database. It currently records on the order of 10,000 retractions per year, compared with roughly 3 million papers published per year.
In summary, the narrative suggests the following failure modes:
A Cautionary Tale deserving of attention and effort: for us as scientists, credibility is an important part of how we operate (the ‘philosophy of doubt’). Where to begin? The speaker suggests, as one example, a grassroots approach: “buddy up” with an RSE (Research Software Engineer).
Questions
This page touches on the broad definition of artificial intelligence as forms of autonomous computation. This big region of the AI Euler diagram encompasses all manners and forms of AI: machine learning, deep learning, convolutional neural networks, supervised and unsupervised training, clustering algorithms such as k-means, spectral graph theory, adversarial networks, and that’s just getting started.
The related sub-pages linked above – gcp, aws and azure – are AI notes on cloud-specific technology stacks.
Suggestion: Build a prompt using the “PACE” guideline acronym: state the Problem, Action, Context, and Example.
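As a hypothetical illustration (the wording below is invented; only the PACE structure comes from the guideline), such a prompt might read:

Problem: My Python script crashes when reading a 2 GB CSV file.
Action: Suggest a memory-efficient way to process it.
Context: pandas 2.x on a laptop with 8 GB of RAM.
Example: A short snippet that reads the file in chunks.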
The central idea in generative AI is the transformer, and the central model hub is Hugging Face.
Let’s run through a Hugging Face usage example, subject to some constraints, and see how the Python library transformers comes into play. In particular we want to learn how to operate an instance of a pipeline, imported from transformers. A pipeline is queryable (analogous to requests) using default models through an API, so the minimum viable code runs to about two to four lines (see the DistilBERT example below).
The source video for this eigenconcept is here on YouTube.
Update the Advanced Package Tool (apt) package index, and permit it to upgrade installed packages:
sudo apt update
sudo apt upgrade
Ensure that python, pip and conda are installed. Let’s assume pip will do the job:
First install PyTorch, then transformers:
pip install torch
pip install transformers
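As a quick sanity check (a minimal sketch, not from the source), confirm that both packages import and report their versions:

import torch          # PyTorch
import transformers   # Hugging Face Transformers

# Both should print a version string if the installs succeeded
print(torch.__version__)
print(transformers.__version__)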
To set up a browser-based interface, clone the following repository:
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui/
The missing step here is to set up an isolated Python environment before installing dependencies. A sketch of one common approach using conda follows (a venv would work equally well; the environment name and Python version are illustrative, not prescribed by the repository):
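# Create and activate a dedicated environment (name "textgen" is arbitrary)
conda create -n textgen python=3.11
conda activate textgen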
pip install -r requirements.txt
Then run the application:
./start_linux.sh # On Linux
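If all goes well, the script launches a browser-based UI; it is typically served locally via Gradio (often at http://localhost:7860, though the exact address is printed to the terminal on startup).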
Example pre-trained model: DistilBERT
from transformers import pipeline

# With no model argument, the sentiment-analysis task falls back to a
# default DistilBERT checkpoint, downloaded on first use
sentiment_pipeline = pipeline("sentiment-analysis")

result = sentiment_pipeline("I love Hugging Face!")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.9998}]
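For reproducibility, the checkpoint can be named explicitly instead of relying on the task default. A sketch, assuming the distilbert-base-uncased-finetuned-sst-2-english checkpoint (commonly the default for this task) is available on the Hub:

from transformers import pipeline

# Pin a specific checkpoint rather than the task default
sentiment_pipeline = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(sentiment_pipeline("I love Hugging Face!"))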
More from the AI:
YouTube: Search “Running HuggingFace models on WSL2” or “Using HuggingFace Transformers on Ubuntu” for a visual guide.
DistilBERT: A smaller, faster, and cheaper version of BERT. It is well-suited for tasks like text classification and sentiment analysis.
BERT: A versatile model for various NLP tasks, including question answering and natural language inference.
RoBERTa: An optimized version of BERT that improves performance on NLP tasks.
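To illustrate the question-answering use mentioned above, here is a minimal sketch; the question and context strings are invented, and the task default resolves to a BERT-family checkpoint at the time of writing:

from transformers import pipeline

# Question answering with the task-default (BERT-family) model
qa = pipeline("question-answering")
answer = qa(
    question="What is the central model hub?",
    context="The central idea in generative AI is the transformer, "
            "and the central model hub is Hugging Face.",
)
print(answer)  # dict with 'score', 'start', 'end', 'answer'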