After I said that the standard doesn't specify what is placed in the input
data when end-of-record is encountered while reading a file connected for
formatted stream access, Richard Maine commented:
! We say an awful lot about what happens at record boundaries on input.
! See all the stuff on pad=, eor=, and advance=, which directly apply.
! What you have is a plain old ordinary record boundary, the same as any
! other record boundary. You shouldn't be able to tell whether it
! resulted from a "/" edit descriptor or writing a achar(10) in A
! format.
Richard continued:
& True you don't get a translation of the record boundary to achar(10),
& but the standard *DOES* say what happens. It doesn't just leave it
& processor-dependent (or I'm very confused).
If this is all perfectly OK, i.e., if a new-line in the file doesn't turn
into anything specific in the input data, why do we need the other side of
it -- that ACHAR(10) in the output data turns into a new-line in the file?
Let's have neither or both. The asymmetry in the standard invites criticism.
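
A minimal sketch of the asymmetry under discussion (the file name is
illustrative, and the comments describe the behavior as I read the standard;
what the input side is required to do is exactly the point at issue):

```fortran
program stream_asymmetry
  implicit none
  character(len=20) :: line

  ! Output side: writing achar(10) to a formatted stream file
  ! causes record termination, i.e., a new-line in the file.
  open (unit=10, file='demo.txt', access='stream', form='formatted', &
        status='replace')
  write (10, '(a)') 'abc' // achar(10) // 'def'   ! two records in the file
  close (10)

  ! Input side: the record boundary terminates the READ, but the
  ! standard does not say it appears as achar(10) in the input data.
  open (unit=10, file='demo.txt', access='stream', form='formatted', &
        status='old')
  read (10, '(a)') line   ! line gets 'abc'; the new-line is consumed,
                          ! not delivered as a character
  close (10)
end program stream_asymmetry
```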
--
What fraction of Americans believe | Van Snyder
Wrestling is real and NASA is fake? | [log in to unmask]
Any alleged opinions are my own and have not been approved or disapproved
by JPL, CalTech, NASA, Dan Goldin, George Bush, the Pope, or anybody else.