
I'm reading a text file into LaTeX that contains two sets of data, separated by an empty line. I'm trying to typeset the first set into a table, and the second set into the same table, separated from the first by \hline. Is this possible?

Here is my data (sorry, I don't know how to define it inline in order to make a fully self-contained MWE...):

10,100
5,100
2,99
0.85,98
0.425,92
0.25,60
0.15,31
0.075,9

0.0274,9.4
0.0176,9.1
0.0107,8.0
0.007,6.9
0.0059,5.6
0.0031,3.7
0.0013,2.5

Here is my MWE:

\documentclass[letterpaper,11pt]{standalone} 
\usepackage{readarray}

\begin{document}
\readarraysepchar{,}
\renewcommand\typesetplanesepchar{\\\hline}
\renewcommand\typesetrowsepchar{\\}
\renewcommand\typesetcolsepchar{&}

\readdef{../01data/data.csv}\data
\readarray\data\array[-,\nrows,\ncols]

\centering
\begin{tabular}{c|c}
Seive & Passing\\
size  & \\
(mm)  & (\%)\\
\hline
\typesetarray\array[1,\nrows,\ncols]\\
\hline
\typesetarray\array[2,\nrows,\ncols]\\
\end{tabular}

\end{document}

Here is the output:

[Image: the typeset table, showing the same data twice and unwanted information after the last cell]

The output I get does not show the second set of data; instead, the first set is typeset twice. In addition, some unwanted data appears after the last cell. Can someone please help me understand what is going on?

cmp
    readarray is not the ideal tool for automating this task, because your two datasets contain differing numbers of rows (8 in the first, 7 in the second). A 3-D array in readarray is required to have equal numbers of rows and columns in each plane. – Steven B. Segletes May 25 '23 at 17:35
  • Thanks Steven, are you able to suggest another approach? – cmp May 26 '23 at 18:26
  • I will give it additional thought, but it would seem that egreg's answer solves the problem fine. – Steven B. Segletes May 26 '23 at 21:26

1 Answer


I don't know how to do it with readarray, but I can with expl3.

\begin{filecontents*}{\jobname.dat}
10,100
5,100
2,99
0.85,98
0.425,92
0.25,60
0.15,31
0.075,9

0.0274,9.4
0.0176,9.1
0.0107,8.0
0.007,6.9
0.0059,5.6
0.0031,3.7
0.0013,2.5
\end{filecontents*}

\documentclass{article}

\ExplSyntaxOn

% user level command
\NewDocumentCommand{\tablefromfile}{m}
 {
  \cmp_table_from_file:n { #1 }
 }

% the input stream
\ior_new:N \g_cmp_table_from_file_ior
% the table body container
\tl_new:N \l_cmp_table_from_file_body_tl

% the internal function
\cs_new_protected:Nn \cmp_table_from_file:n
 {
  % keep changes to \endlinechar local
  \group_begin:
  % no end line character
  \int_set:Nn \endlinechar { -1 }
  % clear the body
  \tl_clear:N \l_cmp_table_from_file_body_tl
  % open the input stream
  \ior_open:Nn \g_cmp_table_from_file_ior { #1 }
  % read line by line and populate the body
  \ior_map_inline:Nn \g_cmp_table_from_file_ior
   { \__cmp_table_from_file_add:n { ##1 } }
  % close the stream
  \ior_close:N \g_cmp_table_from_file_ior
  % print the table
  \begin{tabular}{c|c}
  Seive & Passing \\
  size \\
  (mm) & (\%) \\
  \hline
  \tl_use:N \l_cmp_table_from_file_body_tl
  \end{tabular}
  % end the group
  \group_end:
 }

% the auxiliary function
\cs_new_protected:Nn \__cmp_table_from_file_add:n
 {
  \tl_if_blank:nTF { #1 }
   {% the line is empty: add \hline
    \tl_put_right:Nn \l_cmp_table_from_file_body_tl { \hline }
   }
   {% the line is not empty: split the comma list in the components
    % and add them to the body
    \tl_put_right:Nx \l_cmp_table_from_file_body_tl
     {
      \clist_item:nn { #1 } { 1 } & \clist_item:nn { #1 } { 2 } \exp_not:N \\
     }
   }
 }
\ExplSyntaxOff
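The auxiliary function treats each nonempty line as a two-item comma list: for example, \clist_item:nn { 0.25,60 } { 2 } expands to 60. The x-type \tl_put_right:Nx expands those items as the body is collected, while \exp_not:N keeps the row terminator \\ from being expanded until the table is actually typeset.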

\begin{document}

\tablefromfile{\jobname.dat}

\end{document}

I used filecontents* just to avoid clobbering my files; you can use any file name in the argument of \tablefromfile.
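For instance, with the original file layout from the question, the call would presumably be

\tablefromfile{../01data/data.csv}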

[Image: the resulting two-column table, with the two datasets separated by a horizontal rule]

egreg
  • This solution is absolutely amazing! Thanks @egreg for all the time you put into it. I have another question for you though: what if I have a third body of data in the same file? How can I insert cells from that third body into parts of my document? For example, with pkg readarray, I can call on an individual cell read from the array, and insert it into a table somewhere in my document. – cmp Apr 15 '23 at 21:28
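A possible direction for that follow-up question, sketched with the same expl3 tools. The commands \readcellsfromfile and \cellfromfile below are invented for illustration and are not part of egreg's answer: the idea is to store every blank-line-separated block of the file in a property list under numeric keys, so that any single cell can be retrieved later.

\ExplSyntaxOn
\ior_new:N \g_cmp_cells_ior
\prop_new:N \g_cmp_cells_prop
\int_new:N \l__cmp_cells_block_int
\int_new:N \l__cmp_cells_row_int

% variants needed below
\cs_generate_variant:Nn \prop_gput:Nnn { Nen }
\cs_generate_variant:Nn \clist_item:nn { en }

% \readcellsfromfile{<file>}: store every nonblank line of the file
% under the key "<block>,<row>"; a blank line starts a new block
\NewDocumentCommand{\readcellsfromfile}{m}
 {
  \group_begin:
  \int_set:Nn \endlinechar { -1 }
  \prop_gclear:N \g_cmp_cells_prop
  \int_set:Nn \l__cmp_cells_block_int { 1 }
  \int_zero:N \l__cmp_cells_row_int
  \ior_open:Nn \g_cmp_cells_ior { #1 }
  \ior_map_inline:Nn \g_cmp_cells_ior
   {
    \tl_if_blank:nTF { ##1 }
     {% a blank line starts the next block
      \int_incr:N \l__cmp_cells_block_int
      \int_zero:N \l__cmp_cells_row_int
     }
     {% a data line: store it under "block,row"
      \int_incr:N \l__cmp_cells_row_int
      \prop_gput:Nen \g_cmp_cells_prop
       { \int_use:N \l__cmp_cells_block_int , \int_use:N \l__cmp_cells_row_int }
       { ##1 }
     }
   }
  \ior_close:N \g_cmp_cells_ior
  \group_end:
 }

% \cellfromfile{<block>}{<row>}{<column>}: retrieve a single cell
\NewDocumentCommand{\cellfromfile}{mmm}
 {
  \clist_item:en { \prop_item:Nn \g_cmp_cells_prop { #1 , #2 } } { #3 }
 }
\ExplSyntaxOff

With the data file above, after \readcellsfromfile{\jobname.dat} the call \cellfromfile{2}{3}{1} would print 0.0107, the first field of the third line of the second block.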