DeepSeek

4 posts

Deploy Ollama DeepSeek + RAGFlow Locally

Here's how to deploy Ollama DeepSeek and RAGFlow locally to build your own RAG system.

Deploying Ollama DeepSeek and RAGFlow locally allows you to run powerful natural language processing (NLP) models in your own environment, enabling more efficient data processing and knowledge retrieval. Let’s get started.

1. Environment Preparation

First, ensure your local machine meets the following requirements:

- Operating System: Linux or macOS (Windows is also supported via WSL)
- Python Version: 3.8 or higher
- GPU Support (optional): CUDA and cuDNN (for accelerating deep learning models)

2.
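Once Ollama is running, you can talk to a locally pulled DeepSeek model over its REST API. The sketch below assumes an Ollama server on its default port (11434) and a model pulled as `deepseek-r1`; the helper names are mine, not part of either project.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def ask_deepseek(prompt: str, model: str = "deepseek-r1") -> str:
    """Send a prompt to a locally running DeepSeek model via Ollama."""
    body = json.dumps(build_generate_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama pull deepseek-r1` and a running Ollama server.
    print(ask_deepseek("Summarize what RAG is in one sentence."))
```

RAGFlow can then be pointed at this same local endpoint as its chat model, keeping the whole pipeline on your machine.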

DeepSeek Prompt Tips

Learn how to use DeepSeek to its full potential with these useful prompts to get the best out of it.

Most people are using DeepSeek wrong. After burning the midnight oil testing this thing (and drinking enough coffee to power a small nation), I’ve cracked the code. Forget everything you know about ChatGPT – this is a whole different beast.

1. The Biggest Secret: Ditch the Prompt Templates

Don’t use rigid “professional prompt formulas.” DeepSeek thrives on context and purpose, not step-by-step instructions. Treat it like a clever intern who needs clear goals, not micromanagement.
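The "clear goals, not micromanagement" idea can be made concrete: instead of a rigid multi-step template, state the goal, the context, and the constraints, and let the model decide the steps. This tiny helper (the function and field names are my own illustration, not an official pattern) builds such a prompt:

```python
def goal_prompt(goal: str, context: str = "", constraints: str = "") -> str:
    """Compose a DeepSeek prompt around a clear goal rather than a rigid template."""
    parts = [f"Goal: {goal}"]
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)

# A rigid template micromanages every step; a goal prompt states what
# success looks like and leaves the "how" to the model.
prompt = goal_prompt(
    goal="Write a launch announcement for our CLI tool.",
    context="Audience: developers who already use competing tools.",
    constraints="Under 150 words, no marketing buzzwords.",
)
```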

bolt.diy + deepseek = Free yet Better

Build a free dev environment with bolt.diy and DeepSeek.

bolt.diy is a browser-based full-stack development environment (the open-source community edition of bolt.new) that supports collaborative frontend and backend development, allowing users to run Node.js servers and interact deeply with APIs. Its key advantages include multimodal LLM support (e.g., seamless DeepSeek integration), real-time in-browser coding and debugging, visual deployment pipelines, and one-click publishing to major cloud platforms such as Vercel and Netlify. By pairing it with DeepSeek, developers gain free access to intelligent code completion and API debugging.
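Wiring the two together typically comes down to giving bolt.diy a DeepSeek API key via its environment file. The variable name below follows bolt.diy's example env file at the time of writing; check the `.env.example` in your checkout, as names may change:

```
# .env.local in the bolt.diy project root
DEEPSEEK_API_KEY=sk-...   # your DeepSeek API key (placeholder shown)
```

After restarting the dev server, DeepSeek should appear as a selectable model provider in the bolt.diy UI.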

DeepSeek-R1's Innovation

Learn about the innovative features of DeepSeek-R1.

Innovation 1: Chain of Thought Self-Evaluation

DeepSeek-R1 introduces a technique called “Chain of Thought (CoT),” which allows the model to explain its reasoning step by step. For example, when solving a math problem, it breaks down its thought process into clear steps. If an error occurs, it can be traced back to a specific step, enabling targeted improvements. This self-reflection mechanism not only enhances the model’s logical consistency but also significantly improves accuracy on complex tasks.
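The traceability idea can be sketched in a few lines: record each reasoning step and verify it as you go, so a wrong final answer points back to the step that broke. This is a toy illustration of the principle, not DeepSeek-R1's internal mechanism:

```python
def solve_with_steps(a: float, b: float, c: float):
    """Solve a*x + b = c step by step, checking each step so errors are traceable."""
    steps = []

    # Step 1: subtract b from both sides, leaving a*x = c - b.
    rhs = c - b
    steps.append(("subtract b from both sides", rhs))
    assert b + rhs == c, "step 1 failed: c - b was computed incorrectly"

    # Step 2: divide both sides by a, and verify by substituting back.
    x = rhs / a
    steps.append(("divide both sides by a", x))
    assert abs(a * x + b - c) < 1e-9, "step 2 failed: x does not satisfy a*x + b = c"

    return x, steps

# 2x + 3 = 11  →  x = 4; `trace` names the step that produced each value.
x, trace = solve_with_steps(2, 3, 11)
```

If either assertion fired, you would know exactly which step introduced the error, which is the same property that makes CoT traces useful for targeted improvement.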