How Notellect Works

From connected data to analysis your team can review and reuse

Notellect aligns business meaning first, gives teams a DuckDB and Jupyter workbench for real investigation, and keeps every claim tied to evidence so reviewed work becomes reusable context.

Workflow concept

Dataset Explorer

Connect source types, define semantic models, and keep facts plus glossary close to the data.

Data Workbench

Run DuckDB and Jupyter analysis, mix Python and SQL, and capture draft claims with support.

Insight Reports

Build report sections from claims and tables without stripping away their support.

Across all surfaces

Semantic + trust layer

  • Semantic models and business terms
  • Evidence attached to each claim
  • Draft vs trusted review states
  • Trust Center reuse across workflows

Semantic layer first

Map raw schema into business models, facts, glossary terms, and example queries so analysis starts from aligned meaning.

DuckDB + Jupyter core

Run fast SQL and Python work in one notebook workflow instead of scattering analysis across query tabs, code notebooks, and slides.

Trust built into the workflow

Generated claims stay draft until reviewed, their support stays visible, and promotion into the Trust Center happens only when the team chooses to reuse them.

Product Surfaces

Four surfaces that turn analysis into trusted team knowledge

Most AI products stop at the answer box. Notellect covers the actual places analytical teams work: connected data, DuckDB and Jupyter analysis, reports, and reusable trusted context.

Dataset Explorer

Connect databases, files, workbook tables, and remote file paths, then turn raw schema into business-ready models, facts, glossary terms, and example queries.

Data Workbench

Run DuckDB-backed notebook sessions with Jupyter so Python, SQL, outputs, and draft claims stay in one analytical thread.

Insight Reports

Build report sections from claims, tables, and commentary so supported conclusions and provisional ones remain distinguishable.

Trust Center

Promote accepted facts, glossary terms, execution results, and claims so the next cycle starts from reviewed context instead of a blank page.

Data Foundation

Align business meaning before analysis starts

Garbage in, garbage out. Notellect uses a semantic layer plus data knowledge so raw schema, business terms, facts, and example queries are aligned before generation begins.

Data foundation concept
Datasource layer: DuckDB core

Connected sources land in a fast DuckDB analytical runtime, so teams can investigate across files, databases, and workbook data without waiting on pipeline work.

  • PostgreSQL / MySQL
  • SQLite / workbook tables
  • CSV / JSON / Parquet / Excel
  • HTTP / S3 transports

Query once, join across sources, and keep room for the broader DuckDB ecosystem when teams need to go further.
Semantic layer
Raw schema becomes business-ready structure, so analysts work with recognizable models, definitions, and relationships.
Raw column: `gm_pct_qoq_delta`
Mapped meaning: Quarter-over-quarter gross margin change
Data knowledge
Facts, glossary terms, and example queries become reusable knowledge the next analysis can inherit instead of rediscovering.
  • Datasource and model facts
  • Glossary terms and shared definitions
  • Saved example queries from successful work
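As a rough illustration of the mapping idea (not Notellect's actual model format; the names and structure are invented), a semantic layer can be thought of as a lookup from raw columns to business terms:

```python
# Hypothetical semantic mapping: raw schema names to business meaning.
semantic_model = {
    "gm_pct_qoq_delta": {
        "label": "Quarter-over-quarter gross margin change",
        "definition": "Change in gross margin percentage versus the prior quarter",
        "unit": "percentage points",
    },
}

def describe(column: str) -> str:
    """Return the business label for a raw column, or flag it as unmapped."""
    term = semantic_model.get(column)
    return term["label"] if term else f"unmapped raw column: {column}"
```

The point of keeping this next to the data is that the next analysis inherits the definition instead of re-deriving it from the column name.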

Faster generation does not help if every cycle starts from raw tables and inconsistent business definitions.

Notellect keeps semantic models, facts, glossary, and example queries close to the data so analysts can focus on judgment instead of schema archaeology.

Product concept
Workbench: DuckDB + Jupyter

Investigate the margin drop, keep the supporting steps visible, and promote only what the team wants to reuse.

Claim 1 (Trusted)

Unit economics worsened in the enterprise segment.

Claim 2 (Working session)

Discounting increased in two regions after pricing tests.

Claim 3 (Draft)

Gross margin should normalize next quarter if spend mix resets.

Evidence and meaning: trust visible

Evidence

  • Executed SQL step
  • Notebook output table
  • Pricing glossary term

Trusted context

  • Gross margin definition (trusted)
  • Pricing change claim (draft)

Evidence tree: support links visible

Executed step → Supporting fact → Claim
Data Workbench

Powered by DuckDB and Jupyter, built for real analytical work

Notellect combines a fast DuckDB analytical runtime with a Jupyter notebook workbench. Teams can query connected sources, run Python and SQL in one place, and keep outputs close to the claims they support.

Run Python and SQL together in a Jupyter notebook backed by DuckDB.

Start from semantic models, raw tables, or workbook data in the same session.

Work across PostgreSQL, MySQL, SQLite, CSV, Parquet, JSON, Excel, and remote file transports.

Export useful results back into workbook tables and carry claims forward to review.

Trust and Review

What keeps generated analysis reviewable

A generated conclusion is only useful if the next reviewer can still inspect what supports it, what is still draft, and what has already been trusted.

Evidence

Claims can link to executed notebook results, upstream facts, glossary terms, and prior claims instead of standing alone.

Draft by default

Generated claims stay separate from trusted team context until someone reviews and promotes them.

Promotion preview

Before adding work to Trust Center, Notellect can show the dependency tree of claims and execution results that come with it.

Scoped reuse

Trusted context can be kept at workspace, datasource, or model scope so reuse stays intentional.

Trust Center concept
Trust Center: 19 trusted items

  • Gross margin glossary (trusted)
  • Orders join declaration (trusted)
  • Pricing change claim (trusted)
  • Board-ready commentary (needs review)

Recent updates

  • Claim promoted: now available as trusted context
  • Glossary updated: term definition revised
  • Execution fact linked: notebook result saved for reuse

Scope: workspace, data source, or model
Trust Center

Trust is explicit, reviewable, and reusable

Claims produced in the workbench do not become team knowledge automatically. Reviewers can inspect supporting facts, preview what promotion pulls in, and keep draft and trusted context separate.

Draft and trusted states stay visibly separate.

Claims can carry glossary, fact, execution result, and claim dependencies.

Promotion preview shows what will be added before Trust Center reuse.

Trusted context can be scoped to workspace, datasource, or model.

Request Demo

See the product on a real analytical workflow.

We will map Dataset Explorer, Workbench, Insight Reports, and Trust Center to one recurring analysis process.