According to information architect and multimedia journalist Mirko Lorenz, data-driven journalism is a complete workflow consisting of the following elements: "diving into data" by cleansing and structuring it, "filtering data" by mining for specific information, and "visualizing data" to produce a story. The process can also be extended with further steps, so that it applies at a personal level or at a broader public level.
Another, results-oriented definition of the term comes from data journalist and web strategist Henk van Ess (2012), who holds that "data-driven journalism enables reporters to find stories that have not yet been uncovered, or to find new angles to complete a report through this data-searching workflow, that is, by processing and presenting the data (which may come in any form) with available open-source tools." Van Ess argues that some data-driven workflows yield products that are "outside the realm of good storytelling," because the result emphasizes the problem rather than explaining it. "A good data-driven production process has different layers. It not only lets you find personalized content that matters only to you, but also lets you drill into the relevant details while surveying the big picture."
Telling stories based on the data is the primary goal. The findings from data can be transformed into any form of journalistic writing. Visualizations can be used to create a clear understanding of a complex situation. Furthermore, elements of storytelling can be used to illustrate what the findings actually mean, from the perspective of someone who is affected by a development. This connection between data and story can be viewed as a "new arc" trying to span the gap between developments that are relevant but poorly understood and a story that is verifiable, trustworthy, relevant and easy to remember.
In many investigations the data that can be found may have omissions or be misleading. As one layer of data-driven journalism, a critical examination of data quality is important. In other cases the data might not be public, or might not be in the right format for further analysis, e.g. available only as a PDF. Here the process of data-driven journalism can turn into stories about data quality, or about institutions' refusals to provide the data. As the practice as a whole is still in its early stages, examinations of data sources, data sets, data quality and data formats are therefore an equally important part of this work.
Based on the perspective of looking deeper into the facts and drivers of events, a change in media strategies has been suggested: in this view, the idea is to move "from attention to trust". The creation of attention, which has been a pillar of media business models, has lost relevance because reports of new events are often distributed faster via new platforms such as Twitter than through traditional media channels. Trust, on the other hand, can be understood as a scarce resource. While distributing information via the web is much easier and faster, the abundance of offerings makes verifying and checking the content of any story costly, and this creates an opportunity. The idea of transforming media companies into trusted data hubs has been described in an article cross-published in February 2011 on Owni.eu and Nieman Lab.
The process of transforming raw data into stories is akin to refinement and transformation. The main goal is to extract information recipients can act upon. The task of a data journalist is to extract what is hidden. This approach can be applied to almost any context, such as finances, health, the environment or other areas of public interest.
In 2011, Paul Bradshaw introduced a model he called "The Inverted Pyramid of Data Journalism".
In order to achieve this, the process should be split up into several steps. While the steps leading to results can differ, a basic distinction can be made by looking at six phases:
Data can be obtained directly from governmental databases such as data.gov, data.gov.uk and the World Bank Data API, but also by placing Freedom of Information requests to government agencies; some requests are made and aggregated on websites like the UK's What Do They Know. While there is a worldwide trend towards opening data, there are national differences as to what extent that information is freely available in usable formats. If the data is in a webpage, scrapers are used to generate a spreadsheet. Examples of scrapers include ScraperWiki, the Firefox plugin OutWit Hub, and Needlebase (note: Needlebase will be retired June 1, 2012). In other cases, OCR software can be used to extract data from PDFs.
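As a minimal illustration of what a scraper does (the tools named above wrap this up in a point-and-click interface), the sketch below uses only Python's standard library to turn an HTML table into spreadsheet-style rows. The page fragment is hypothetical; a real scraper would first fetch the page with `urllib.request`.

```python
from html.parser import HTMLParser

class TableScraper(HTMLParser):
    """Collect the cell text of every <tr> row in an HTML page."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())

# Hypothetical page fragment standing in for a scraped government webpage.
html = """
<table>
  <tr><th>Agency</th><th>Budget</th></tr>
  <tr><td>Health</td><td>120</td></tr>
  <tr><td>Transport</td><td>80</td></tr>
</table>
"""
scraper = TableScraper()
scraper.feed(html)
print(scraper.rows)  # each table row becomes a list of cell strings
```

Each row list can then be written out with the `csv` module, which is essentially the "webpage to spreadsheet" step the scraping tools automate.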
Data can also be created by the public through crowdsourcing, as Henk van Ess showed in March 2012 at the Datajournalism Conference in Hamburg.
Usually data is not in a format that is easy to visualize. For example, there may be too many data points, or the rows and columns may need to be sorted differently. Another issue is that, once examined, many datasets need to be cleaned, structured and transformed. Various open-source tools such as Google Refine, Data Wrangler and Google Spreadsheets allow data to be uploaded, extracted or formatted.
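The kind of cleaning these tools perform can be sketched in a few lines of plain Python: normalizing whitespace and casing, parsing inconsistently formatted numbers, and dropping duplicates. The messy CSV below is invented for illustration; it is not a real dataset.

```python
import csv
import io

# Hypothetical messy export: stray spaces, mixed casing, three different
# number formats, and one duplicate record.
raw = """country, population
 germany ,"81,000,000"
FRANCE,65 000 000
france ,65000000
"""

def clean_rows(text):
    """Normalize names, parse numbers, and drop exact duplicates."""
    seen, out = set(), []
    for row in csv.DictReader(io.StringIO(text), skipinitialspace=True):
        country = row["country"].strip().title()
        population = int("".join(ch for ch in row["population"] if ch.isdigit()))
        key = (country, population)
        if key not in seen:
            seen.add(key)
            out.append({"country": country, "population": population})
    return out

print(clean_rows(raw))
```

Tools like Google Refine offer the same operations (trim, case transforms, clustering of near-duplicates) interactively, which matters when a dataset has thousands of rows rather than three.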
To visualize data in the form of graphs and charts, applications such as Many Eyes or Tableau Public are available. Yahoo! Pipes and Open Heat Map are examples of tools that enable the creation of maps based on data spreadsheets. The number of options and platforms is expanding. Some new offerings provide options to search, display and embed data, an example being Timetric.
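Under the hood, chart tools of this kind turn rows of data into drawing instructions. As a hedged, stdlib-only sketch (not how Many Eyes or Tableau Public are actually implemented), the function below renders (label, value) pairs as a minimal SVG bar chart that could be embedded in a web page; the sample values are invented.

```python
def bar_chart_svg(data, bar_width=40, gap=10, height=120):
    """Render (label, value) pairs as a minimal SVG bar chart."""
    peak = max(value for _, value in data)
    parts = []
    for i, (label, value) in enumerate(data):
        h = round(value / peak * (height - 20))  # scale tallest bar to fit
        x = i * (bar_width + gap)
        parts.append(f'<rect x="{x}" y="{height - h}" '
                     f'width="{bar_width}" height="{h}" fill="steelblue"/>')
        parts.append(f'<text x="{x}" y="{height + 14}" font-size="10">{label}</text>')
    width = len(data) * (bar_width + gap)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height + 20}">' + "".join(parts) + "</svg>")

# Hypothetical yearly counts, e.g. incidents per year in a scraped dataset.
svg = bar_chart_svg([("2009", 40), ("2010", 55), ("2011", 90)])
print(svg[:60])
```

The hosted platforms add interactivity, map projections and embedding on top of the same basic idea: data in, markup out.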
To create meaningful and relevant visualizations, journalists use a growing number of tools. There are by now several descriptions of what to look for and how to do it. The most notable published articles are:
There are different options for publishing data and visualizations. A basic approach is to attach the data to single stories, similar to embedding web videos. More advanced concepts allow the creation of dossiers, e.g. pages that combine a number of visualizations, articles and links to the data. Often such specials have to be coded individually, as many content management systems are designed to display single posts based on the date of publication.
Providing access to existing data is another phase, which is gaining importance. Think of such sites as "marketplaces" (commercial or not), where datasets can easily be found by others. Especially when the insights for an article were gained from open data, journalists should provide a link to the data they used, so that others can investigate it (potentially starting another cycle of interrogation, leading to new insights).
Providing access to data and enabling groups to discuss what information could be extracted is the main idea behind Buzzdata, a site using the concepts of social media such as sharing and following to create a community for data investigations.
Other platforms (which can be used both to gather and to distribute data):
A final step of the process is to measure how often a dataset or visualization is viewed.
In the context of data-driven journalism, the extent of such tracking, such as collecting user data or any other information that could be used for marketing or other purposes beyond the user's control, can be viewed as problematic. One newer, non-intrusive option to measure usage is a lightweight tracker called PixelPing. The tracker is the result of a project by ProPublica and DocumentCloud, and there is a corresponding back-end solution to collect the data. The software is open source and can be downloaded via GitHub.
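The principle behind such a tracker is simple: each page that embeds a dataset also loads a tiny image ("pixel") from a counting server, so the server's request log records one line per view. The sketch below is not PixelPing's actual code; it is a hypothetical back-end step that tallies views per dataset from such a log, with invented log lines and a made-up `/pixel.gif` endpoint.

```python
import re
from collections import Counter

# Hypothetical access-log excerpt: one GET request per page view of the
# tracking pixel, with the dataset identified in the query string.
log = """\
10.0.0.1 - - [01/Feb/2012] "GET /pixel.gif?dataset=budget-2011 HTTP/1.1" 200
10.0.0.2 - - [01/Feb/2012] "GET /pixel.gif?dataset=budget-2011 HTTP/1.1" 200
10.0.0.3 - - [01/Feb/2012] "GET /pixel.gif?dataset=war-logs HTTP/1.1" 200
"""

def count_views(log_text):
    """Count views per dataset from requests to the tracking pixel."""
    hits = re.findall(r'GET /pixel\.gif\?dataset=([\w-]+)', log_text)
    return Counter(hits)

print(count_views(log))  # Counter({'budget-2011': 2, 'war-logs': 1})
```

Because the pixel request carries only the dataset name, this style of counting avoids the user-profiling concerns raised above: it measures what is viewed, not who views it.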
There is a growing list of examples of how data-driven journalism can be applied:
Another prominent use of data-driven journalism is related to the release by whistle-blower organization WikiLeaks of the Afghan War Diary, a compendium of 91,000 secret military reports covering the war in Afghanistan from 2004 to 2010. Three global broadsheets, namely The Guardian, The New York Times and Der Spiegel, dedicated extensive sections to the documents. The Guardian's reporting included an interactive map pointing out the type, location and casualties of 16,000 IED attacks; The New York Times published a selection of reports that lets readers roll over underlined text to reveal explanations of military terms; and Der Spiegel provided hybrid visualizations (containing both graphs and maps) on topics such as the number of deaths related to insurgent bomb attacks. For the Iraq War logs release, The Guardian used Google Fusion Tables to create an interactive map of every incident in which someone died, a technique it used again for the England riots of 2011.