Option 2: Use the ollama show command to print the Modelfile
You can check the prompt template configuration by printing the model's Modelfile, e.g. ollama show llama3.2:1b --modelfile.
Ollama model
All locally downloaded models are served through the Ollama API at http://localhost:11434.
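ChatOllama connects to this endpoint by default; if your Ollama server runs at a different address, it can be set explicitly through the base_url parameter. A minimal sketch (the examples below simply rely on the default):

from langchain_ollama import ChatOllama

# Explicitly point ChatOllama at the Ollama server address (http://localhost:11434 is the default)
llm = ChatOllama(model="llama3.2:1b", base_url="http://localhost:11434")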
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
llm = ChatOllama(model="llama3.2:1b")
prompt = ChatPromptTemplate.from_template("Provide a brief explanation of this {topic}")
# Chain the prompt, the model, and the output parser
chain = prompt | llm | StrOutputParser()

# stream() returns a generator that yields output chunks as they are produced
response = chain.stream({"topic": "deep learning"})
from langchain_core.messages import AIMessageChunk

# Print the streaming response from the model token by token
for token in response:
    if isinstance(token, AIMessageChunk):
        print(token.content, end="", flush=True)
    elif isinstance(token, str):
        print(token, end="", flush=True)
Deep learning is a subfield of machine learning that involves the use of artificial neural networks (ANNs) to analyze and interpret data. ANNs are modeled after the human brain, with layers of interconnected nodes or "neurons" that process and transmit information.
In traditional machine learning, algorithms like linear regression and decision trees are used to solve problems. However, these methods can be inflexible and prone to overfitting, where the model becomes too specialized to the training data and fails to generalize well to new, unseen data.
Deep learning addresses this limitation by using multiple layers of ANNs with different types of nodes (e.g., sigmoid, ReLU, or tanh) that work together to learn complex patterns in the data. The key characteristics of deep learning models include:
1. **Hierarchical structure**: Deep models have multiple layers, each with its own set of nodes and activation functions.
2. **Non-linearity**: Deep networks use non-linear activation functions, such as ReLU or tanh, which introduce non-linearity into the model.
3. **Regularization**: Regularization techniques, like dropout or L1/L2 regularization, are used to prevent overfitting by randomly dropping out nodes or adding a penalty term to the loss function.
Deep learning has been widely adopted for tasks such as image recognition, natural language processing, speech recognition, and predictive modeling. It's particularly useful when dealing with high-dimensional data, complex relationships between variables, or noisy data.
Streaming responses are possible through the single chain created above.
astream() : asynchronous streaming
async for chunks in chain.astream({"topic": "Google"}):
    print(chunks, end="", flush=True)
You're referring to Google. Google is an American multinational technology company that specializes in internet-related services and products. It was founded on September 4, 1998, by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University.
Google's primary mission is to organize the world's information and make it universally accessible and useful. The company's search engine, which is now one of the most widely used search engines in the world, was initially designed to provide hyperlinked acronyms (or "knolinks") for Web pages, but over time has evolved into a full-fledged search engine that can index and retrieve information from the entire web.
Google's other notable products and services include:
* Gmail: an email service
* Google Maps: a mapping and navigation service
* Google Drive: a cloud storage service
* Google Docs: a word processing and document editing service
* YouTube: a video-sharing platform
* Chrome: a web browser
Google is known for its innovative and user-friendly products, as well as its commitment to improving the online experience. It has become one of the most valuable companies in the world, with a market capitalization of over $1 trillion.
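The top-level async for above works as-is in a Jupyter/IPython session; in a plain Python script the coroutine has to be driven by an event loop. A minimal sketch, assuming the same chain as above:

import asyncio

async def main():
    # Stream chunks asynchronously from the chain defined earlier
    async for chunk in chain.astream({"topic": "Google"}):
        print(chunk, end="", flush=True)

asyncio.run(main())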
Output format: JSON
Use the latest version of Ollama and specify json as the output format.
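A minimal sketch, assuming the format="json" parameter of ChatOllama together with a prompt that explicitly asks for JSON:

from langchain_ollama import ChatOllama

# format="json" asks Ollama to constrain the output to valid JSON
llm = ChatOllama(model="llama3.2:1b", format="json", temperature=0)

response = llm.invoke(
    "Return a JSON object with the keys 'city' and 'country' for the capital of France."
)
print(response.content)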
Multimodal models such as llava also run locally; like any local model, they must be downloaded (e.g. ollama pull llava:7b) before they can be used.
Call the chain.invoke method to pass an image and a text query and generate an answer.
ChatOllama : uses a multimodal LLM, such as llava
StrOutputParser : parses the output of the LLM into a string
chain : pipelines prompt_func, llm, and StrOutputParser (a sketch of prompt_func and image_b64 follows below)
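prompt_func and image_b64 are assumed to be defined earlier in the notebook; a minimal sketch of what they could look like, using a base64-encoded local image (the file path here is hypothetical):

import base64

from langchain_core.messages import HumanMessage

def convert_to_base64(image_path: str) -> str:
    # Read an image file and return its contents as a base64-encoded string
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def prompt_func(data: dict) -> list:
    # Build a single multimodal HumanMessage containing the image and the text query
    image_part = {
        "type": "image_url",
        "image_url": f"data:image/jpeg;base64,{data['image']}",
    }
    text_part = {"type": "text", "text": data["text"]}
    return [HumanMessage(content=[image_part, text_part])]

image_b64 = convert_to_base64("./images/sample_image.jpg")  # hypothetical path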
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

# Multimodal model; temperature=0 makes the description deterministic
llm = ChatOllama(model="llava:7b", temperature=0)

# Pipeline: build the multimodal prompt, call the model, parse the output into a string
chain = prompt_func | llm | StrOutputParser()

query_chain = chain.invoke(
    {"text": "Describe a picture in bullet points", "image": image_b64}
)

print(query_chain)
- The image shows a picturesque tropical beach scene.
- In the foreground, there is a rocky shore with clear blue water and white foam from waves breaking on the rocks.
- A small island or landmass is visible in the background, surrounded by the ocean.
- The sky is clear and blue, suggesting good weather conditions.
- There are no people visible in the image.
- The overall style of the image is a natural landscape photograph with vibrant colors and clear details.